PRIMECLUSTER Messages
FUJITSU Software

6.1.1 Information Messages

This section explains RMS information messages that are recorded in the switchlog file.

Check the component name in the displayed message and refer to the corresponding section in the table below. Within each section, the messages are explained in numerical order.

When the message text and its Content are identical, the Content is omitted. Likewise, when no action is required, the Corrective action is omitted.
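
Every entry in this section follows the same pattern: the component name and message number in parentheses, then the message body. When scanning a switchlog in a script, that prefix can be split off with a simple regular expression. The following is an illustrative sketch, not a PRIMECLUSTER tool; the function name is hypothetical:

```python
import re

# Hypothetical helper (not part of PRIMECLUSTER): split a switchlog
# message of the form "(ADC, 6) text ..." into its component name,
# message number, and body, so the matching reference section
# (e.g. "6.1.1.1 ADC") can be looked up.
MSG_RE = re.compile(r"^\((?P<comp>[A-Z]+),\s*(?P<num>\d+)\)\s*(?P<body>.*)$")

def parse_message(line):
    m = MSG_RE.match(line.strip())
    if not m:
        return None
    return m.group("comp"), int(m.group("num")), m.group("body")
```

For example, `parse_message("(ADM, 92) Starting RMS now on all available cluster hosts")` yields the component `"ADM"` and number `92`, which the table above maps to section 6.1.1.2.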

Component name   Reference
ADC              "6.1.1.1 ADC: Admin configuration"
ADM              "6.1.1.2 ADM: Admin, command, and detector queues"
BAS              "6.1.1.3 BAS: Startup and configuration errors"
BM               "6.1.1.4 BM: Base monitor"
CML              "6.1.1.5 CML: Command line"
CTL              "6.1.1.6 CTL: Controllers"
CUP              "6.1.1.7 CUP: userApplication contracts"
DET              "6.1.1.8 DET: Detectors"
GEN              "6.1.1.9 GEN: Generic detector"
INI              "6.1.1.10 INI: init script"
MIS              "6.1.1.11 MIS: Miscellaneous"
SCR              "6.1.1.12 SCR: Scripts"
SHT              "6.1.1.13 SHT: Shutdown"
SWT              "6.1.1.14 SWT: Switch requests (hvswitch command)"
SYS              "6.1.1.15 SYS: SysNode objects"
UAP              "6.1.1.16 UAP: userApplication objects"
US               "6.1.1.17 US: us files"
WLT              "6.1.1.18 WLT: Wait list"
WRP              "6.1.1.19 WRP: Wrappers"

6.1.1.1 ADC: Admin configuration

(ADC, 6) Host <SysNode> with configuration <configfile> requested to join its cluster.

(ADC, 22) Attempting to clear the cluster Wait state for SysNode <sysnode> and reinitialize the Online state.

(ADC, 26) An out of sync modification request request1, request2 has been detected.

(ADC, 28) Dynamic modification finished.

(ADC, 29) Config file "CONFIG.rms" is absent or does not contain a valid entry, remaining in minimal configuration.

(ADC, 42) No remote host has provided configuration data within the interval specified by HV_WAIT_CONFIG. Running now the default configuration as specified in "CONFIG.rms"

(ADC, 50) hvdisp temporary file <filename> exceeded the size of <size> bytes, hvdisp process <pid> is restarted.

(ADC, 52) Waiting for application <userapplication> to finish its <request> before shutdown.

(ADC, 53) Waiting for application <app> to finish its switch to a remote host before shutdown.

(ADC, 54) Waiting for host <sysnode> to shut down.

(ADC, 55) No busy application found on this host before shutdown.

(ADC, 56) Waiting for busy or locked application <app> before shutdown.

(ADC, 66) Notified SF to recalculate its weights after dynamic modification.

(ADC, 71) Please check the bmlog for the list of environment variables on the local node.

6.1.1.2 ADM: Admin, command, and detector queues

(ADM, 35) Dynamic modification started with file <configfilename> from host <sysnode>.

(ADM, 92) Starting RMS now on all available cluster hosts

(ADM, 93) Ignoring cluster host <sysnode>, because it's in state: State

(ADM, 94) Starting RMS on host <SysNode> now!

(ADM, 101) Processing forced shutdown request for host SysNode. (request originator: RequestSysNode)

(ADM, 103) app: Shutdown in progress. AutoSwitchOver (ShutDown) attribute is set, invoking a switchover to next priority host

(ADM, 104) app: Shutdown in progress. AutoSwitchOver (ShutDown) attribute is set, but no other Online host is available. SwitchOver must be skipped!

(ADM, 108) Processing shutdown request for host SysNode. (request originator: RequestSysNode)

(ADM, 109) Processing shutdown request for all hosts. (request originator: SysNode)

(ADM, 112) local host is about to go down. CLI requests on this hosts are no longer possible

(ADM, 119) Processing hvdump command request for local host <sysnode>.

(ADM, 124) Processing forced shutdown request for all hosts. (request originator: SysNode)

(ADM, 127) debugging is on, watch your disk space in /var (notice #count)

Content:

Debug mode is enabled, so watch the free disk space in /var.

This message is displayed when debug mode is turned on with the "hvutil -l" command.

Corrective action:

Make sure that /var disk space is sufficient.
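
Free space in /var can be checked with standard OS commands. A generic sketch (the commands and threshold below are illustrative assumptions, not taken from the PRIMECLUSTER manual):

```shell
# Check the free space in /var while RMS debug mode is on, since debug
# logs accumulate there. Uses POSIX df output (-P) so the filesystem
# entry is guaranteed to be on a single line.
avail_kb=$(df -Pk /var | awk 'NR==2 {print $4}')
echo "Available space in /var: ${avail_kb} KB"
```

Run this periodically (or from cron) while debug mode remains enabled, and disable debug mode or clean old logs if the value drops unacceptably low.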

6.1.1.3 BAS: Startup and configuration errors

(BAS, 33) Resource <resource> previously received detector report "DetReportsOfflineFaulted", therefore the application <app> cannot be switched to this host.

6.1.1.4 BM: Base monitor

(BM, 9) Starting RMS monitor on host <sysnode>.

(BM, 27) Application <userapplication> does not transition to standby since it has one or more faulted or cluster exclusive online resources.

Content:

The application will skip standby processing because at least one of its resources is faulted or cluster exclusive online.

Corrective action:

Confirm the messages generated before or after this message, and take actions as necessary.

(BM, 34) RMS invokes hvmod in order to modify its minimal configuration with a startup configuration from the command line.

(BM, 35) RMS invokes hvmod in order to bring in a new host that is joining a cluster into its current configuration.

(BM, 36) RMS invokes hvmod in order to modify its minimal configuration with a configuration from a remote host while joining the cluster.

(BM, 37) RMS invokes hvmod in order to delete a host from its configuration while ejecting the host from the cluster.

(BM, 38) RMS invokes hvmod in order to bring a host from a cluster to which it is about to join.

(BM, 39) RMS invokes hvmod in order to run a default configuration.

(BM, 40) RMS starts the process of dynamic modification due to a request from hvmod utility.

(BM, 43) Package parameters for packagetype package <package> are <packageinfo>.

(BM, 44) Package parameters for <package1> / <package2> package not found.

Corrective action:

Install the package <package1> or <package2> if necessary.

(BM, 45) This RMS monitor has been started as <COMMAND>.

(BM, 47) RMS monitor has exited with the exit code <code>.

Corrective action:

Check the message printed immediately before and take the necessary action.

(BM, 48) RMS monitor has been normally shut down.

(BM, 50) RMS monitor is running in CF mode.

(BM, 55) The RMS-CF-CIP mapping in <configfile> for SysNode name <SysNode> has matched the CF name <cfname>.

(BM, 56) The RMS-CF-CIP mapping in <CONFIGFILENAME> for SysNode name <SYSNODE> has found the CF name to be <CFNAME> and the CIP name to be <CIPNAME>.

(BM, 57) The RMS-CF-CIP mapping in <configfile> for SysNode name <SysNode> has failed to find the CF name.

(BM, 60) The resulting configuration has been saved in <filename>, its checksum is <checksum>.

(BM, 61) A checksum verification request has arrived from host <sysnode>, that host's checksum is <xxxx>.

(BM, 62) The local checksum <xxxx> has been replied back to host <sysnode>.

(BM, 63) Host <sysnode> has replied the checksum <xxxx> equal to the local checksum. That host should become online now.

(BM, 64) Checksum request has been sent to host <hostname>.

(BM, 65) Package parameters for <package> package not found.

Corrective action:

Install the package <package> if necessary.

(BM, 84) The RMS-CF-CIP mapping in <configfilename> for SysNode name <sysnode> has found the CF name to be <cfname> and the CIP name to be <cipname>, previously defined as <olscfname>.

Corrective action:

Confirm that the CF name of the CIP configuration file contains no errors.

(BM, 87) The Process Id (pid) of this RMS Monitor is <PID>.

(BM, 91) Some scripts are still running. Waiting for them to finish before normal shutdown.

(BM, 100) Controlled application <app> is controlled by a scalable controller <controller>, but that application's AutoStartUp attribute is set to 1.

Corrective action:

Set the AutoStartUp attribute of userApplication <app> to 0.

(BM, 102) Application <app> has a scalable controller, but that application has its AutoStartUp attribute set to 0.

(BM, 115) The base monitor on the local host has captured the lock.

(BM, 120) The RMS base monitor is locked in memory via mlockall().

(BM, 121) RMS monitor uses the <class> scheduling class for running scripts.

6.1.1.5 CML: Command line

(CML, 3) *** New Heartbeat_Miss_Time = time sec.

(CML, 16) Turn log off by user.

(CML, 22) Modify log level, bmLogLevel = "loglevel".

6.1.1.6 CTL: Controllers

(CTL, 3) Controller <controller> is requesting online application <app> on host <SysNode> to switch offline because more than one controlled applications are online.

(CTL, 4) Controller <controller> has its attribute AutoRecoverCleanup set to 1. Therefore, it will attempt to bring the faulted application offline before recovering it.

(CTL, 5) Controller <controller> has its attribute AutoRecoverCleanup set to 0. Therefore, it will not attempt to bring the faulted application offline before recovering it.

(CTL, 9) Controller <controller> has restored a valid combination of values for attributes <IgnoreOnlineRequest> and <OnlineScript>.

(CTL, 10) Controller <controller> has restored a valid combination of values for attributes <IgnoreOfflineRequest> and <OfflineScript>.

(CTL, 12) Controller <controller> has restored a valid combination of values for attributes <IgnoreStandbyRequest> and <OnlineScript>.

(CTL, 13) Controller <controller> does not propagate offline request to its controlled application(s) <app> because its attribute <IndependentSwitch> is set to 1.

(CTL, 14) Controller <controller> cannot autorecover application <app> because there is no online host capable of running this application.

(CTL, 15) Controller <controller> cannot autorecover application <app> because the host <SysNode> from the application's PriorityList is neither in Online, Offline, or Faulted state.

(CTL, 18) Scalable Controller <controller> from application <app1> cannot determine any online host where its controlled application(s) <app2> can perform the current request. This controller is going to fail now.

6.1.1.7 CUP: userApplication contracts

(CUP, 6) app Prio_list request not satisfied, trying again ...

6.1.1.8 DET: Detectors

(DET, 20) hvgdstartup file is empty.

Corrective action:

Add the necessary information when you use the hvgdstartup file. Use the DetectorStartScript attribute when you create a new RMS configuration file.

The hvgdstartup file may become unsupported in a future release.

(DET, 22) <resource>: received unexpected detector report "ReportedState" - ignoring it
Reason: Online processing in progress, detector report may result from an interim transition state.

(DET, 23) <resource>: received unexpected detector report "ReportedState" - ignoring it
Reason: Offline processing in progress, detector report may result from an interim transition state.

(DET, 25) <resource>: received unexpected detector report "ReportedState" - ignoring it
Reason: Standby processing in progress, detector report may result from an interim transition state.

(DET, 30) Resource <resource> previously received detector report "DetReportsOnlineWarn", the warning is cleared due to report "DetReportsOnline".

(DET, 32) Resource <resource> previously received detector report "DetReportsOfflineFaulted", the state is cleared due to report "report".

6.1.1.9 GEN: Generic detector

(GEN, 6) command ignores request for object object not known to that detector. Request will be repeated later.

Corrective action:

Confirm the messages generated before or after this message, and take actions as necessary.

6.1.1.10 INI: init script

(INI, 2) InitScript does not exist in hvenv.

Content:

The RMS environment variable RELIANT_INITSCRIPT is not defined.

Corrective action:

Define the RMS environment variable RELIANT_INITSCRIPT when you use InitScript. When InitScript is not used, no action is necessary.

(INI, 3) InitScript does not exist.

Content:

The file defined by the RMS environment variable RELIANT_INITSCRIPT does not exist.

Corrective action:

Place the file defined by the RMS environment variable RELIANT_INITSCRIPT when you use InitScript. When InitScript is not used, no action is necessary.

(INI, 5) All system objects initialized.

(INI, 6) Using filename for the configuration file.

(INI, 8) Restart after un-graceful shutdown (e.g. host failure): A persistent fault entry will be created for all userApplications, which have PersistentFault attribute set

Content:

The userApplication with the PersistentFault attribute set transitioned to the Faulted state because RMS did not terminate normally the last time it ran.

Corrective action:

Clear the Faulted state of userApplication as necessary.

(INI, 15) Running InitScript <InitScript>.

(INI, 16) InitScript completed.

6.1.1.11 MIS: Miscellaneous

(MIS, 10) The file filename can not be located during the cleanup of directory.

6.1.1.12 SCR: Scripts

(SCR, 3) The detector that died is detector_name.

Corrective action:

Confirm the messages generated before or after this message, and take actions as necessary.

(SCR, 6) REQUIRED PROCESS RESTARTED: detector_name restarted.

(SCR, 7) REQUIRED PROCESS NOT RESTARTED: detector_name is no longer needed by the configuration.

(SCR, 16) Resource <resource> WarningScript has completed successfully.

(SCR, 19) Failed to execute OfflineDoneScript with resource <resource>: errorreason.

Corrective action:

Search for a cause that OfflineDoneScript terminated abnormally, and take actions as necessary.

(SCR, 22) The detector <detector> with pid <pid> has been terminated. The time it has spent in the user and kernel space is <usertim> and <kerneltime> seconds respectively.

Corrective action:

Confirm the messages generated before or after this message, and take actions as necessary.

(SCR, 23) The script with pid <pid> has terminated. The time it has spent in the user and kernel space is <usertime> and <kerneltime> seconds respectively.

Corrective action:

Confirm the messages generated before or after this message, and take actions as necessary.

6.1.1.13 SHT: Shutdown

(SHT, 16) RMS on node SysNode has been shut down with command.

6.1.1.14 SWT: Switch requests (hvswitch command)

(SWT, 9) app: AutoStartAct(): object is already in stateOnline!

(SWT, 10) app: Switch request forwarded to a responsible host: SysNode.

(SWT, 15) app: Switch request forwarded to the node currently online: SysNode.

(SWT, 17) app: target host of switch request is already the currently active host, sending the online request now!

(SWT, 27) Cluster host <SysNode> is not yet online for application <app>.

(SWT, 29) HV_AUTOSTARTUP_IGNORE list of cluster hosts to ignore when autostarting is: SysNode.

(SWT, 38) Processing forced switch request for application app to node SysNode.

(SWT, 39) Processing normal switch request for application app to node SysNode.

(SWT, 40) Processing forced switch request for Application app.

(SWT, 41) Processing normal switch request for Application app.

(SWT, 48) A controller requested switchover for the application <object> is attempted although the host <onlinehost> where it used to be Online is unreachable. Caused by the use of the force flag the RMS secure mechanism has been overridden, switch request is processed. In case that host is in Wait state the switchover is delayed until that host becomes Online, Offline, or Faulted.

(SWT, 49) Application <app> will not be switched Online on host <oldhost> because that host is not Online. Instead, it will be switched Online on host <newhost>.

(SWT, 50) Application <app> is busy. Switchover initiated from a remote host <remotenode> is delayed on this local host <localnode> until a settled state is reached.

(SWT, 51) Application <app> is busy performing standby processing. Switchover initiated due to a shutdown of the remote host <remotenode> is delayed on this local host <localnode> until Standby processing finishes.

(SWT, 52) Application <app> is busy performing standby processing. Therefore, the contracting process and a decision for its AutoStartUp is delayed on this local host <localnode> until Standby processing finishes.

(SWT, 53) Application <app> is busy performing standby processing. The forced switch request is delayed on this local host <localnode> until Standby processing finishes.

(SWT, 61) Processing request to enter Maintenance Mode for application app.

(SWT, 62) Processing request to leave Maintenance Mode for application app.

(SWT, 63) Forwarding Maintenance Mode request for application app to the host SysNode, which is currently the responsible host for this userapplication.

(SWT, 64) Request to leave Maintenance Mode for application app discarded. Reason: Application is not in Maintenance Mode.

Corrective action:

Confirm that userApplication is in maintenance mode before ending the maintenance mode.

(SWT, 65) Processing request to leave Maintenance Mode for application app, which was forwarded from host SysNode. Nothing to do, application is not in Maintenance Mode.

Corrective action:

Confirm that userApplication is in maintenance mode before ending the maintenance mode.

(SWT, 66) Processing of Maintenance Mode request for application app is finished, transitioning into stateMaint now.

(SWT, 67) Processing of Maintenance Mode request for application app is finished, transitioning out of stateMaint now.

(SWT, 70) AutoStartUp for application <app> is invoked though not all necessary cluster hosts are Online, because PartialCluster attribute is set.

(SWT, 71) Switch requests for application <app> are now permitted though not all necessary cluster hosts are Online, because PartialCluster attribute is set.

(SWT, 73) Any AutoStart or AutoStandby for app is bypassed. Reason: userApplication is in Maintenance Mode

(SWT, 74) Maintenance Mode request for application app discarded. Reason: Application is busy or locked.

Corrective action:

Wait until the userApplication <app> is no longer busy or locked, and then execute the maintenance mode request again.

(SWT, 75) Maintenance Mode request for application app discarded. Reason: Application is Faulted.

Corrective action:

Clear the Faulted state of the userApplication <app> and then execute the maintenance mode request.

(SWT, 76) Maintenance Mode request for application app discarded. Reason: A controlled application is not ready to leave Maintenance Mode.

Corrective action:

Prepare to clear the maintenance mode of the userApplication that is controlled by the userApplication <app>, and then execute the maintenance mode request.

(SWT, 77) Maintenance Mode request for application app discarded. Reason: Application is controlled by another application and has "ControlledSwitch" attribute set.

Corrective action:

Execute the maintenance mode request again for the userApplication that controls the userApplication <app>.

(SWT, 78) Maintenance Mode request for application app discarded. Reason: Application has not yet finished its state initialisation.

Corrective action:

After the userApplication <app> is initialized, execute the maintenance mode request again.

(SWT, 79) Maintenance Mode request for application app discarded. Reason: Some resources are not in an appropriate state for safely returning into active mode. A "forceoff" request may be used to override this security feature.

Corrective action:

Restore the resources that belong to the userApplication <app> to the state they were in before maintenance mode was entered. After this, execute the maintenance mode request again.

The maintenance mode can be forcibly cleared by using the forceoff option even when resources are in an inappropriate state.

(SWT, 80) Maintenance Mode request for application app discarded. Reason: Sysnode SysNode is in "Wait" state.

Corrective action:

Clear the Wait state of a node and then execute the maintenance mode request again.

(SWT, 82) The SysNode SysNode is seen as Online, but it is not yet being added to the priority list of any controlled or controlling userApplication because there is ongoing activity in one or more applications (eg. <app> on <SysNode>).

(SWT, 83) The SysNode SysNode is seen as Online, and now all userApplications have no ongoing activity - SysNode being added to priority lists.

(SWT, 85) The userApplication app is in state Inconsistent on host SysNode, the priority hvswitch request is being redirected there in order to clear the inconsistency.

(SWT, 86) The userApplication app is in state Inconsistent on host SysNode1, the hvswitch request to host SysNode2 is being redirected there in order to clear the inconsistency.

(SWT, 87) The userApplication app is in state Maintenance.  Switch request skipped.

Corrective action:

Clear the maintenance mode of a userApplication <app> and then execute the switch request again.

(SWT, 88) The following node(s) were successfully killed by the forced application switch operation: hosts

Corrective action:

Restart forcibly stopped nodes if needed.

For how to forcibly switch cluster applications, see "Notes on Switching a Cluster Application Forcibly" in "PRIMECLUSTER Installation and Administration Guide."

(SWT, 89) Processing forced switch request for resource resource to node sysnode.

(SWT, 90) Processing normal switch request for resource resource to node sysnode.

6.1.1.15 SYS: SysNode objects

(SYS, 2) This host has received a communication verification request from host <SysNode>. A reply is being sent back.

(SYS, 3) This host has received a communication verification reply from host <SysNode>.

(SYS, 5) This host is sending a communication verification request to host <SysNode>.

(SYS, 9) Attempting to shut down the cluster host SysNode by invoking a Shutdown Facility via (sdtool -k hostname).

(SYS, 12) Although host <hostname> has reported online, it does not respond with its checksum. That host is either not reachable, or does not have this host <localhost> in its configuration. Therefore, it will not be brought online.

Corrective action:

Check the RMS configuration file on all cluster nodes to verify that the same RMS configuration file is running on every node.

(SYS, 51) Remote host <SysNode> replied correct checksum out of sync.

6.1.1.16 UAP: userApplication objects

(UAP, 10) app: received agreement to go online. Sending Request Online to the local child now.

(UAP, 13) appli AdminSwitch: application is expected to go online on local host, sending the online request now.

(UAP, 26) app received agreement to go online. Sending RequestOnline to the local child now.

(UAP, 31) app:AdminSwitch: passing responsibility for application to host <SysNode> now.

(UAP, 46) Request <request> to application <app> is ignored because this application is in state Unknown.

Corrective action:

After the userApplication <app> is initialized, execute the request <request> again.

6.1.1.17 US: us files

(US, 2) FAULT RECOVERY ATTEMPT: The object object has faulted and its AutoRecover attribute is set. Attempting to recover this resource by running its OnlineScript.

(US, 3) FAULT RECOVERY FAILED: Re-running the OnlineScript for object failed to bring the resource Online.

Corrective action:

Confirm the messages generated before or after this message, and take actions as necessary.

(US, 4) FAULT RECOVERY SUCCEEDED: Resource resource has been successfully recovered and returned to the Online state.

(US, 7) object: Transitioning into a Fault state caused by a persistent Fault info

(US, 8) Cluster host SysNode has been successfully status.

(US, 9) Cluster host SysNode has become online.

(US, 11) Temporary heartbeat failure disappeared. Now receiving heartbeats from cluster host hostname again.

(US, 12) Cluster host SysNode has become Faulted. A shut down request will be sent immediately!

(US, 13) Cluster host SysNode will now be shut down!

(US, 16) app: Online processing finished!

(US, 17) app: starting Online processing.

(US, 18) app: starting Offline processing.

(US, 19) app: starting Offline (Deact) processing.

(US, 20) app: Offline (Deact) processing finished!

(US, 21) app: Offline processing finished!

(US, 22) app: starting PreCheck.

(US, 24) app: Fault processing finished!

(US, 25) app: Collecting outstanding Faults ....

(US, 26) app: Fault processing finished!
Starting Offline processing.

(US, 27) app: precheck successful.

(US, 30) app: Offline processing after Fault finished!

(US, 32) FAULT RECOVERY SKIPPED! userApplication is already faulted. No fault recovery is possible for object object!

Corrective action:

Confirm the messages generated before or after this message, and take actions as necessary.

(US, 34) app: Request standby skipped -- application must be offline or standby for standby request to be honored.

Corrective action:

Send the Standby request to userApplications that are in the Offline or Standby state.

(US, 35) app: starting Standby processing.

(US, 36) app: Standby processing finished!

(US, 37) app: Standby processing skipped since this application has no standby capable resources.

(US, 40) app: Offline processing due to hvshut finished!

(US, 41) The userApplication <userapplication> has gone into the Online state after Standby processing.

(US, 44) resource: Fault propagation to parent ends here! Reason is either a MonitorOnly attribute of the child reporting the Fault or the "or" character of the current object

(US, 46) app: Processing of Clear Request finished. resuming Maintenance Mode.

(US, 56) The userApplication userapplication is already Online at RMS startup time. Invoking an Online request immediately in order to clean up possible inconsistencies in the state of the resources.

6.1.1.18 WLT: Wait list

(WLT, 2) Resource resource's ScriptType (script) has exceeded the ScriptTimeout of timeout seconds.

Corrective action:

Investigate why the script <script> times out and take action as necessary.

(WLT, 4) Object object's script has been killed since that object has been deleted.

(WLT, 7) Sending SIGNAL to script <script> (pid) now

6.1.1.19 WRP: Wrappers

(WRP, 19) RMS logging restarted on host <SysNode> due to a hvlogclean request.

(WRP, 20) This switchlog is being closed due to a hvlogclean request. RMS continues logging in a new switchlog that is going to be opened immediately. New detector logs are also going to be reopened right now.

(WRP, 21) A message cannot be sent into a Unix message queue from the process <pid>, <process>, after <number> attempts in the last <seconds> seconds. Still trying.

Corrective action:

Check the values of system message queue tunables such as msgmnb and msgtql. If necessary, increase the values and reboot.

(WRP, 22) A message cannot be sent into a Unix message queue id <queueid> by the process <pid>, <process>.

Corrective action:

Check the values of system message queue tunables such as msgmnb and msgtql. If necessary, increase the values and reboot.
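
On Linux hosts the current SysV message queue limits can be read under /proc. A generic sketch (msgtql is a Solaris-style tunable with no direct Linux equivalent; the paths below are an assumption that the host is Linux, not something stated in the PRIMECLUSTER manual):

```shell
# Inspect current SysV message queue limits on Linux.
msgmnb=$(cat /proc/sys/kernel/msgmnb)   # maximum bytes per message queue
msgmni=$(cat /proc/sys/kernel/msgmni)   # maximum number of queue identifiers
echo "msgmnb=${msgmnb} msgmni=${msgmni}"
```

If the limits are too small, raise them with sysctl (persistently via /etc/sysctl.conf) and reboot as the corrective action above directs; on other operating systems consult the platform's kernel tuning documentation.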

(WRP, 26) Child process <cmd> with pid <pid> has been killed because it has exceeded its timeout period.

(WRP, 27) Child process <cmd> with pid <pid> will not be killed though it has exceeded its timeout period.

(WRP, 36) Time synchronization has been re-established between the local node and cluster host SysNode.

(WRP, 37) The package parameters of the package <package> on the remote host <hostname> are: Version = <version>, Load = <load>.

(WRP, 38) The Process Id (pid) and the startup time of the RMS monitor on the remote host <hostname> are <pid> and <startuptime>.

(WRP, 49) The base monitor on the local host is unable to ping the echo port on the remote host SysNode.

(WRP, 50) The base monitor on the local host is able to ping the echo port on the remote host SysNode, but is unable to communicate with the base monitor on that host.

(WRP, 53) Current heartbeat mode is <mode>.

(WRP, 59) The cluster host <SysNode> does not support ELM heartbeat. ELM heartbeat does not start. Use UDP heartbeat only.

Corrective action:

Confirm the messages generated before or after this message, and take actions as necessary.

(WRP, 63) The ELM heartbeat started for the cluster host <SysNode>.

(WRP, 66) The elm heartbeat detects that the cluster host <SysNode> has become offline.