PRIMECLUSTER Global File Services Configuration and Administration Guide 4.2 (Solaris(TM) Operating Environment)
Appendix E List of Messages > E.3 GFS Shared File System Daemon messages
This section explains the error messages of each GFS Shared File System daemon.
sfcfrmd daemon was not started by a super user.
Start the sfcfrmd daemon with the sfcfrmstart(1M) command as a super user.
The sfcfrmd daemon was started directly.
Start the sfcfrmd daemon with the sfcfrmstart(1M) command as a super user.
The sfcfrmd daemon failed to start because the file pathname was invalid or could not be read.
Check whether the file pathname is set up correctly.
The sfcfrmd daemon failed to start, because a working file in /var/opt/FJSVsfcfs was corrupted.
Make sure that the /var/opt directory has free space with the df(1M) command, and restart the node.
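As a quick check, the available space can be read from df(1M) output. The helper below is a sketch (the function name free_kb is mine) that assumes the POSIX column layout produced by df -kP:

```shell
# Hypothetical helper: report the available kilobytes on the file
# system holding a given path, using the POSIX (-P) output of df.
free_kb() {
    # NR == 2 skips the header line; $4 is the "Available" column
    df -kP "$1" | awk 'NR == 2 { print $4 }'
}

free_kb /var
```

If the value is zero or near zero, free space on the file system before restarting the node.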
The sfcfrmd daemon failed to start, because /etc/services has no entry of sfcfsrm or pathname is broken.
Make sure that the /etc/services file on every node in the cluster has the entry "sfcfsrm 9200/tcp". If not, add it. For details of this file, see services(4) in the "Solaris X Reference Manual Collection."
If the setting above is right, pathname may be corrupted. Collect the diagnostic data with fjsnap and contact local Customer Support.
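The check and repair of the /etc/services entry can be sketched as follows. The function name ensure_sfcfsrm_entry is hypothetical, and the file argument lets the logic be tried on a copy before touching /etc/services itself (which requires root):

```shell
# Hypothetical helper: check a services file for the sfcfsrm entry
# and append it when missing. Run against /etc/services on every node.
ensure_sfcfsrm_entry() {
    svc_file="$1"
    if grep -q '^sfcfsrm[[:space:]]' "$svc_file"; then
        echo "sfcfsrm entry present"
    else
        # Entry string taken from this manual: sfcfsrm 9200/tcp
        printf 'sfcfsrm\t\t9200/tcp\n' >> "$svc_file"
        echo "sfcfsrm entry added"
    fi
}
```

Repeat the check on every node in the cluster; the entry must be identical everywhere.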
The sfcfrmd daemon failed to start because memory could not be allocated.
Check the state of the system and wait for other processes to end, or increase the swap space and reboot the node.
Failed to start the sfcfrmd daemon.
Obtain a core dump of the sfcfrmd daemon with gcore(1) on all the nodes (for example, "/usr/bin/ps -e | grep sfcfrmd" to get the process ID and "/usr/bin/gcore process_id"). In addition, collect the diagnostic data with fjsnap and contact local Customer Support.
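The collection step can be scripted roughly as below. dump_cores is a hypothetical helper, and matching on the last column of "ps -e" assumes the command name is reported without a path:

```shell
# Hypothetical helper: dump a core of every process with the given
# name via gcore(1). The loop degrades gracefully when the daemon
# is not running.
dump_cores() {
    daemon="$1"
    pids=$(/usr/bin/ps -e | awk -v d="$daemon" '$NF == d { print $1 }')
    if [ -z "$pids" ]; then
        echo "no $daemon process found"
        return 1
    fi
    for pid in $pids; do
        /usr/bin/gcore "$pid"    # writes core.<pid> in the current directory
    done
}
```

Run it on each node, for example "dump_cores sfcfrmd", and include the resulting core files with the fjsnap data.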
sfcfrmd daemon failed to read from or write to the management partition (pathname).
Check whether the management partition of the GFS Shared File System is set up correctly and whether the disk is operating normally.
The sfcfrmd daemon failed to start because the cluster was not started.
Start the cluster.
The CIP address registered in the management partition changed.
Re-create and restore the management partition to return the configuration to the state before the CIP address changed. For details on how to re-create and restore it, see Section 20.5, "Backup of the management partition information," and Section 20.6, "Restoration of the management partition information."
The sfcfrmd daemon failed to connect to the sfcprmd daemon.
Make sure that the sfcprmd daemon is running with the ps(1) command (for example, "/usr/bin/ps -e | grep sfcprmd"). If the sfcprmd daemon was stopped by the unload dependency script, restart it with the load script ("/opt/SMAW/SMAWcf/dep/start.d/S81sfcfs load"). Otherwise, restart the node.
If this message appears while the sfcprmd daemon is running, collect the diagnostic data with fjsnap and contact local Customer Support.
The sfcfrmd daemon failed to connect to the sfchnsd daemon.
Make sure that the sfchnsd daemon is running with the ps(1) command (for example, "/usr/bin/ps -e | grep sfchnsd"). If the sfchnsd daemon was stopped by the unload dependency script, restart it with the load script ("/opt/SMAW/SMAWcf/dep/start.d/S81sfcfs load"). Otherwise, restart the node.
If this message appears while the sfchnsd daemon is running, collect the diagnostic data with fjsnap and contact local Customer Support.
sfchnsd daemon is not activated.
Obtain a core dump of the sfchnsd daemon with gcore(1) (for example, "/usr/bin/ps -e | grep sfchnsd" to get the process ID and "/usr/bin/gcore process_id"). In addition, collect the diagnostic data with fjsnap and contact local Customer Support.
A service port could not be acquired.
Make sure that the /etc/services file on every node in the cluster has the entry "sfcfsrm 9200/tcp". If not, add it. For details of this file, see services(4) in the "Solaris X Reference Manual Collection."
The sfcfrmd daemon failed to start, because node configuration information was being added to the management partition.
After the addition of the node configuration information is completed, start the sfcfrmd daemon with the sfcfrmstart(1M) command. To check whether the addition completed, execute sfcsetup(1M) without arguments and confirm that information on the added node is displayed.
The meta-data server could not be started.
Contact local Customer Support.
The primary meta-data server for the file system (mount_point) could not be started.
Contact local Customer Support.
The secondary meta-data server for the file system (mount_point) could not be started.
Contact local Customer Support.
Switch processing of the meta-data server for the file system (mount_point) failed.
Contact local Customer Support.
The primary meta-data server for the file system (mount_point) terminated abnormally.
Contact local Customer Support.
The secondary meta-data server for the file system (mount_point) terminated abnormally.
Contact local Customer Support.
An illegal operation was detected during processing in the file system (mount_point).
Obtain a core dump and contact local Customer Support.
Memory could not be allocated in the primary MDS for the file system (mount_point). System memory is insufficient.
Increase the swap space or real memory.
Memory could not be allocated in the secondary MDS for the file system (mount_point). System memory is insufficient.
Increase the swap space or real memory.
Memory could not be allocated during takeover of the MDS for the file system (mount_point). System memory is insufficient.
Increase the swap space or real memory.
An inconsistency occurred during processing in the file system (mount_point).
Obtain a core dump and contact local Customer Support.
Log replay by the meta-data server for the file system (mount_point) failed.
Contact local Customer Support.
An error was detected in a released i-node (inum) in a file system.
Obtain a core dump and contact local Customer Support.
An error was detected in update log management of the file system (mount_point).
Obtain a core dump and contact local Customer Support.
An error was detected in the file system data reference count.
Obtain a core dump and contact local Customer Support.
The GFS Shared File System monitoring daemon terminated abnormally because the error described in the message occurred.
Collect the diagnostic data with fjsnap and then contact local Customer Support.
Starting up the primary MDS for the file system (mount_point) has failed.
Collect the diagnostic data with fjsnap and then contact local Customer Support.
Starting up the secondary MDS for the file system (mount_point) has failed.
Collect the diagnostic data with fjsnap and then contact local Customer Support.
Switchover of the MDS for the file system (mount_point) failed.
Collect the diagnostic data with fjsnap and then contact local Customer Support.
The primary MDS for the file system (mount_point) has failed.
Collect the diagnostic data with fjsnap and then contact local Customer Support.
The secondary MDS for the file system (mount_point) has failed.
Collect the diagnostic data with fjsnap and then contact local Customer Support.
Mounting on the primary MDS for the file system (mount_point) failed. The GFS Shared File System monitoring daemon terminates.
Collect the diagnostic data with fjsnap and then contact local Customer Support.
Mounting on the secondary MDS for the file system (mount_point) failed.
There is no problem with the operation of the file system. Collect the diagnostic data with fjsnap and then contact local Customer Support.
The file system (mountpoint) has been closed.
The target mount point cannot be used and only unmounting can be performed. Collect the diagnostic data with fjsnap and then contact local Customer Support.
This section explains the messages of the sfcfs_mount command, which is called by the sfcfsd daemon.
sfcfs_mount was not executed by a super user.
Do not execute the module directly.
A setting error was found in /etc/vfstab. Therefore, mounting the file system failed.
If AC output a message just before this one, take the corrective action described in that message.
Correct the entry in /etc/vfstab, then try mounting again.
Some process is using mount_point, or a file system is already mounted at mount_point. Therefore, mounting the file system failed.
Check the process that is using mount_point with fuser(1M), or check the file system mounted at mount_point with mount(1M).
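The second check can be scripted roughly as below. mounted_at is a hypothetical helper; reading the third field assumes the common "device on dir type fstype" form of the mount listing, and on Solaris you would pair it with "fuser -c dir" for the process check:

```shell
# Hypothetical helper: report whether anything is mounted at a path.
mounted_at() {
    # $3 is the mounted-on directory in "device on dir type fs ..." output
    if mount | awk '{ print $3 }' | grep -qx "$1"; then
        echo "$1 is already mounted"
    else
        echo "$1 is not mounted"
    fi
}
```

If the path is already mounted, unmount it or pick a different mount point before retrying.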
A setting error was found in /etc/vfstab. Therefore, mounting the file system failed.
Correct the entry in /etc/vfstab, then try mounting again.
block_special is set to write-protected. Therefore, mounting the file system failed.
Change the block_special setting to read and write mode.
The file system contains files larger than 2 gigabytes, but the mount option nolargefiles was specified. Therefore, mounting the file system failed.
Mount the file system without the nolargefiles option, or remove the files larger than 2 gigabytes from the file system.
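To locate the offending files before choosing a remedy, a sketch like the following can be used (large_files is a hypothetical name). POSIX find counts -size in 512-byte blocks, so anything over 4194304 blocks exceeds 2 GiB:

```shell
# Hypothetical helper: list files larger than 2 gigabytes under a
# directory, the files that make a nolargefiles mount fail.
large_files() {
    # +4194304 blocks of 512 bytes = anything over 2 GiB
    find "$1" -type f -size +4194304 -print
}
```

Run it against the file system's mount point (mounted once without nolargefiles) to see which files are affected.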
Internal error occurred. Therefore, mounting the file system failed.
Install the same version level and the same patch IDs of the GFS Shared File System on all nodes. If they are already the same, collect the diagnostic data with fjsnap and then contact local Customer Support.
Internal error occurred during communication with the meta-data server of mount_point. Therefore, mounting the file system failed.
Install the same version level and the same patch IDs of the GFS Shared File System on all nodes. If they are already the same, collect the diagnostic data with fjsnap and then contact local Customer Support.
Internal error was found in the response from the meta-data server. Therefore, mounting the file system failed.
Install the same version level and the same patch IDs of the GFS Shared File System on all nodes. If they are already the same, collect the diagnostic data with fjsnap and then contact local Customer Support.
The Access Client could not connect to the meta-data server. Therefore, mounting the file system failed.
Check whether the meta-data server is down; in that case, try mounting again.
Internal error occurred. Therefore, mounting the file system failed.
Collect the diagnostic data with fjsnap and then contact local Customer Support.
The port number of sfcfs-n in /etc/services was not found. Therefore, mounting the file system failed.
Restore the definition of sfcfs-n in /etc/services.
Internal error occurred. Therefore, mounting the file system failed.
Collect the diagnostic data with fjsnap and then contact local Customer Support.
The host address of host could not be obtained. Therefore, mounting the file system failed.
Check the definition of the host name (host), or change the host name definition of the file system with sfcadm(1M).
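The host name check can be sketched with getent, which queries the configured name services; resolve_host is a hypothetical helper, and the host name passed to it is whichever name sfcadm(1M) recorded for the file system:

```shell
# Hypothetical helper: confirm that a host name resolves through
# the configured name services (files, DNS, NIS, ...).
resolve_host() {
    if getent hosts "$1" > /dev/null 2>&1; then
        echo "$1 resolves"
    else
        echo "$1 does not resolve"
    fi
}
```

If the name does not resolve, fix the name-service configuration or re-register the file system's host name with sfcadm(1M).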
Execution was attempted with other than super user permission.
Do not execute the module directly.
sfcpncd daemon failed to boot.
Collect the diagnostic data with fjsnap and then contact local Customer Support.
Creation of the semaphore failed.
Collect the diagnostic data with fjsnap and then contact local Customer Support.
sfcpncd daemon terminated abnormally.
Collect the diagnostic data with fjsnap and then contact local Customer Support.
Execution was attempted with other than super user permission.
Do not execute the module directly.
sfcprmd daemon failed to boot.
Collect the diagnostic data with fjsnap and then contact local Customer Support.
Opening /dev/null failed.
Check the system environment.
Accessing /var/opt/FJSVsfcfs/.sfcprmd_uds failed.
Collect the diagnostic data with fjsnap and then contact local Customer Support.
There is insufficient memory to allow sfcprmd daemon to boot.
Increase the swap space or the real memory.
The semaphore created by sfcpncd daemon cannot be accessed.
Collect the diagnostic data with fjsnap and then contact local Customer Support.
file_name is not executable.
See the "PRIMECLUSTER (Solaris) Installation Guide" and reinstall the package.
The status of the GDS volume that was specified as the management partition is not ACTIVE. Alternatively, when exiting GFS, the user may have deleted a GDS volume without first using sfcsetup -d to delete node configuration information from the management partition in the target node.
Check whether the volume exists by executing the following procedure:
# ls -l management_partition
Create a volume according to the instructions in "Operation" of the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide."
Restore the management partition according to the instructions in Section 20.6, "Restoration of the management partition information."
See Chapter 19, "Management Partition Operations (GUI) " and set up the management partition.
If the volume status is not ACTIVE, change the status to ACTIVE. For instructions on how to change the volume status, see the "Operation" of the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide."
Check whether the volume exists by executing the following procedure:
# ls -l management_partition
Execute the following operations to release information on the deleted GDS volume:
Delete the node configuration information from the management partition by executing the following procedure:
For instructions on how to change the volume status, see "Operations" of the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide."
sfcfrmd daemon failed to boot.
Check the previous sfcfrmstart error message that was displayed on the console, then take action for that message according to the instructions in Appendix E.5.6, "sfcfrmstart command."
The daemon could not be stopped.
Collect the diagnostic data with fjsnap and contact local Customer Support.