FUJITSU
Index
Message Table of Contents
ERROR: Installation was failed.
ERROR: Please install the first CD-ROM at first.
ERROR: Please install the GUI packages of the first CD-ROM at first.
ERROR: This installation is running now.
ERROR: This software needs Solaris 8 or later.
ERROR: This software needs <sparc> architecture.
ERROR: To use this installer you will need to be the root user.
Warning: The package <FJSViomp> has not been installed.
Warning: The package <FJSVsnap> has not been installed.
Warning: The package <SMAWccbr> has not been installed.
/opt/FJSVclis/bin/i_os_setup:test:unknown operator 8
Allocated by another: <input value>
Cannot delete the directory.
ERROR: HOSTNAME : IP address not found in /etc/inet/hosts
ERROR: HOSTNAME : MAC address not found in /etc/ethers
FJSVclis: WARNING: <HOSTNAME>: rm_install_client not found.  filepath /export/install/<directory>/Solaris_X/Tools/rm_install_client  Cannot delete Solaris JumpStart definitions for <HOSTNAME>.  Please run rm_install_client from Solaris CD-ROM later.
INFO: The following selected products are the same. <product name 1> <product name 2> Please return to the menu and select again. Hit enter key: <prompt>
Input error: <input value>
Input error: <input value>: "All" keyword exists.
Input error: <input value>: Directory is not empty
Input error: <input value>: "free" keyword exists.
Input error: <input value>: "Overlap slice" can not be modified.
i_os_setup: ERROR: add_install_client command failed
i_os_setup: ERROR: check command failed
i_os_setup: ERROR: check command was not found.
i_os_setup: ERROR: HOSTNAME: IP address was not found in /etc/inet/hosts
i_os_setup: ERROR: Solaris CD image was not found
<NetworkInterfaceName> is not valid network interface line 6 position 19
panic -boot: Could not mount filesystem. Program terminated
Product CD image registration failed.
Product CD image registration failed. Deleting the failed directory...
RPC: Timed out. root directory: <Solaris storage directory>/Tools/Boot mount server not responding
Share error: <input value>
The directory can not be registered again: <input value>
The selected product can not be installed with this installer program.
This installer program is running by other process.
ERROR: /tmp needs <TMP_LEAST> KB at least  
ERROR: </usr/sbin/dmidecode> command not found  
ERROR: /var needs <VAR_LEAST> KB at least  
ERROR: CF driver is loaded    
ERROR: failed: rpm *  
ERROR: Failed to install FJQSS<Information Collection Tool>
ERROR: internal error: *  
ERROR: no package of product <PROD> on CDx
ERROR: no package of product set <PSET> on CDx  
ERROR: platform <PLAT> not supported
ERROR: please install the first CD-ROM at first
ERROR: product <PROD> on platform <PLAT> not supported      
ERROR: product <PROD1> and <PROD2> contains the same package <PKG>  
ERROR: product set <PSET> on platform <PLAT> not supported  
ERROR: syntax error  
ERROR: syntax error ( <PSET> <PLAT> )
ERROR: the installation process is running now  
ERROR: to use this installer you will need to be the root user
INFO: no package to update
INFO: The installation process stopped by user request
Installation failed
Please see the following log file.  /var/log/install/cluster_install
Please see the following log file.  /var/log/install/cluster_install.x
ERROR: /tmp needs <TMP_LEAST> KB at least  
ERROR: /var needs <VAR_LEAST> KB at least  
ERROR: CF driver is loaded
ERROR: failed: rpm *
ERROR: internal error: *
ERROR: product <PROD> on platform <PLAT> not supported
ERROR: product set <PSET> on platform <PLAT> not supported
ERROR: syntax error
ERROR: syntax error ( <PSET> <PLAT> )
ERROR: the installation process is running now
ERROR: There exists GDS object(s)
ERROR: to use this uninstaller you will need to be the root user
INFO: no package to uninstall
INFO: The uninstallation process stopped by user request
Please see the following log file.   /var/log/install/cluster_uninstall
Please see the following log file.   /var/log/install/cluster_uninstall.x
Uninstallation failed.
2022 : The language Language is not available. Defaulting to English.
2023 : The file file has been replaced on node node. View has been restarted with the current file.
2593 : The unload operation was not successful:
2752 : The following services are running on node: (CFManager.java: CF stop dialog)
2757 : The following services are installed on node: (CFManager.java: opens the Services dialog)
2924 : Information: SF Wizard: Reconfig started on node.
3009 : Information: Reconnect to node node succeeded.
3022 : Information: RMS not installed on host node.
3027 : Information: Reinitializing hvdisp connections to all hosts.
3039 : RMS is not running on any of the hosts.
3071 : removed session message.
3080 : RMS is not installed on node.
3081 : Connecting to cluster nodes... Please Wait...
3100 : Information: Ignoring env node.
3101 : Information: Ignoring envL node.
2012 : No matching entries found.
2019 : Exit Cluster Admin?
2584 : node failed to stop. Do you wish to retry?
2589 : node failed to start. Do you wish to retry?
2597 : Do you wish to invoke the Shutdown Facility Wizard to configure this cluster?
2751 : This configuration uses unconnected interfaces. It is not possible to verify the integrity of the configuration. Do you wish to continue? (Findall.java: check the unconnected interface configuration)
2753 : Are you sure you want to mark node0 as down on node1? (CFManager.java: confirmation for marking a node DOWN)
2754 : Are you sure you wish to remove node from CIM? (CFManager.java: confirmation for removing a node from CIM)
2755 : Are you sure you wish to stop CF and all services on all nodes?
2756 : Are you sure you wish to override CIM on node? (CFManager.java: confirmation for CIM override)
2904 : Exit Shutdown Facility configuration wizard?
2999 : ICMP shutdown agent cannot be selected because the cluster is not on the guest domain in Oracle VM Server for SPARC. Refer to the "PRIMECLUSTER Installation and Administration Guide" and select an appropriate shutdown agent.
3011 : Information: RMS is not running on any of the hosts. It must be started.
3035 : Shutdown failed. Click the "msg" tab for details of the error returned from hvshut. Do you want to force a shutdown?
3045 : Are you sure you want to activate application application across the entire cluster? Note that a separate Online request will be needed to actually start the application.
3046 : Are you sure you want to deactivate application application across the entire cluster? Note that an Activation request will be needed to bring the application out of its deactivated state.
3047 : Are you sure you wish to attempt to clear all faults for application application on <node type> <node name>?
3049 : Are you sure you want to attempt to clear wait state for <node name> <node type> on all cluster hosts? Note that this command assumes the cluster host has been manually "killed", i.e., it has been shut down such that no cluster resources are online. If this command is executed without first having manually "killed" the cluster host, data corruption may occur!
3050 : Are you sure you want to attempt to clear wait state for <node name> <node type> on all cluster hosts? Note that it would be done by returning the specified <node type> to the online state.
3051 : Are you sure you wish to force application application online on <node type> <node name>? Warning: The forced switch option ignores potential error conditions. Used improperly, it can result in data corruption.
3052 : Are you sure you wish to take application application offline on host node and bring it online on <node type> <node name>?
3053 : Are you sure you wish to bring application application online on <node type> <node name>?
3054 : Are you sure you wish to bring application application online on the highest priority host? Note: If the application is already online on the highest priority host, this operation will not have any effect.
3055 : Are you sure you wish to start the RMS Configuration Monitor on <node type> <node name>?
3056 : Are you sure you wish to bring application application to a standby state?
3060 : Fatal Error internal: RMS.clone called with null pointer.
3061 : Error: Remote connection failed, Exception: message.
3062 : Error: Unable to connect to host domain port. message Verify node name, port number, and that the web server is running.
3063 : Error: Unable to open reader for file file Exception: exception.
3064 : No open sessions.
3065 : Error: Session <rms session> not found.
3066 : Missing rc: internal Error.
3067 : rmsCluster.RT is not null.
3068 : Warning: Configuration has no graph, only <number of nodes> disjoint nodes.
3069 : Warning: Unable to draw graph.
3083 : CRM is not installed on node. (The FJSVwvfrm package is not installed on node.)
3138 : All SysNodes are online or coming up. (Invalid action.)
3141 : Unable to get valid RMS or CF node list.
3146 : Are you sure you want to activate scalable application application across the entire cluster? Note that a separate Online request will be needed to actually start the application.
3147 : Are you sure you want to deactivate scalable application application across the entire cluster? Note that an Activation request will be needed to bring the application out of its deactivated state.
3148 : Are you sure you wish to attempt to clear all faults for scalable application application on <node type> <node name>?
3151 : Are you sure you wish to take scalable application application offline on host node and bring it online on <node type> <node name>?
3152 : Are you sure you wish to bring scalable application application online on <node type> <node name>?
3153 : Are you sure you wish to bring scalable application application online on the highest priority host? Note: If the application is already online on the highest priority host, this operation will not have any effect.
3154 : Are you sure you wish to bring scalable application application to a standby state?
3161 : Are you sure you wish to take application application out of maintenance mode?
3163 : Are you sure you wish to take scalable application application out of maintenance mode?
3164 : Are you sure you wish to take ALL the applications on the cluster out of maintenance mode?
4100 : ICMP shutdown agent cannot be selected because the I/O fencing function of GDS is not configured. Refer to the "PRIMECLUSTER Installation and Administration Guide" and configure the I/O fencing function of GDS first.
4101 : For CF node node, the IP Address m and IP Address n are duplicates. Select a different IP Address or decrease the number of IP Addresses.
4102 : The following IP address(es) in the configuration are displayed as blank because they are not assigned on the node. Select IP address(es) from the list of valid IP address(es) assigned on the node. IP Address1(node1) IP Address2(node2) ... IP Address n(node n)
4103 : IP address column "IP address n" for CF node node is not selected. You must specify an IP address to check whether the CF node is alive.
2905 : Please select at least one CF node to continue.
2909 : Empty SF configuration found. Click "ok" to create a new configuration.
2945 : Interface is being used by CF on node. Using the same interface for the Shutdown Facility may cause problems in split brain situations. Do you wish to continue using the same interface?
2949 : The following nodes are unreachable: node. Running the SF Wizard when some nodes are unreachable can result in incorrect node elimination later on. We strongly recommend that you exit the SF Wizard and do the configuration at a later time when all the nodes are up and reachable. Do you want to exit the SF Wizard?
2953 : Timeout for the agent Shutdown Agent is timeout, which is different from the default timeout timeout for this Shutdown Agent. The timeout value should be 20 if the number of hosts is less than or equal to 4. If the number of hosts is more than 4, the timeout value should be (6 x no. of nodes) + 2. Do you want to set the default timeout value?
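Message 2953 spells out the default timeout rule for a Shutdown Agent: 20 seconds when the cluster has 4 hosts or fewer, otherwise (6 x number of nodes) + 2 seconds. A minimal sketch of that rule (the function name is illustrative, not part of the product):

```python
def default_sf_timeout(num_hosts: int) -> int:
    """Default Shutdown Agent timeout per message 2953:
    20 seconds for up to 4 hosts, otherwise (6 x nodes) + 2."""
    if num_hosts <= 4:
        return 20
    return 6 * num_hosts + 2
```

For example, a 6-node cluster would get 6 x 6 + 2 = 38 seconds as its default.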
2954 : Unable to get the status of the shutdown agent on the following nodes: node. Check the hardware/software configuration for the shutdown agent. Running the SF Wizard now can result in an incorrect configuration. We strongly recommend that you exit the SF Wizard and do the configuration after correcting the shutdown agent setup. Do you want to exit the SF Wizard?
2958 : By choosing "Use Defaults", you will reset the previously configured username and passwords. Are you sure you want to use the default settings?
3007 : Warning: Reconnect to node failed, trying again after time sec...
3008 : Warning: Lost data connection for node node. Attempting to reconnect...
3014 : Warning: Ignoring remote data:
3015 : Warning: Interrupt while reading data, line:
3017 : Warning: RMS node node already marked as localhost.
3019 : Warning: node node not found.
3036 : Shut down RMS on the local node without shutting down running applications. All RMS nodes are notified that a controlled shutdown is under way on the local node. This may break the consistency of the cluster. No further action may be executed by RMS until the cluster consistency is re-established. This re-establishment includes restart of RMS on the shutdown host. Do you want to continue?
3037 : This option forces the configuration monitor to clean up and shut down RMS on the local system without performing offline processing. This may break the consistency of the cluster. No further action may be executed by RMS until the cluster consistency is re-established. This re-establishment includes restart of RMS on the shutdown host. Do you want to continue?
3038 : This option shuts down RMS on all nodes along with the user applications. An attempt will be made to bring all online user applications offline. Do you want to continue?
3085 : RMS cannot be started. Please start CF first.
3094 : Maximum number of post cards open should not be more than five. Please close some of the post cards.
3120 : Lost connection to gateway host. Status of RMS is unknown. Press the Retry button to try and reconnect to the host.
3149 : Are you sure you wish to bring scalable application userapplication offline? Note that it would be brought offline without initiating a switchover or shutting down RMS.
3150 : Are you sure you wish to force scalable application userapplication online on node type node name? Warning: The forced switch option ignores potential error conditions. Used improperly, it can result in data corruption.
3155 : This option forces the configuration monitor to clean up and shut down RMS on all systems without performing offline processing. This may break the consistency of the cluster. No further action may be executed by RMS until the cluster consistency is re-established. Do you want to continue?
3160 : Are you sure you wish to take application userapplication into maintenance mode? Warning: RMS monitors applications in maintenance mode, but does not take any corrective actions if the application resources fail.
3162 : Are you sure you wish to take scalable application userapplication into maintenance mode? Warning: RMS monitors applications in maintenance mode, but does not take any corrective actions if the application resources fail.
3165 : Are you sure you wish to take ALL the applications on the cluster into maintenance mode? Warning: RMS monitors applications in maintenance mode, but does not take any corrective actions if the application resources fail.
2000 : Error getting nodes or no active nodes to manage.
2001 : Error in loading image: image.
2002 : Timeout checking installed packages.
2004 : Connection to port port on host node failed: message. Please verify node name, port number, and that the web server is running.
2010 : No node object for: node.
2011 : Unknown data stream.
2013 : Finished searching the document.
2014 : File not found.
2016 : Invalid time range.
2017 : Unknown Message Identifier in resource file:
2018 : Illegal arguments for Message Identifier:
2020 : Start time is invalid.
2021 : End time is invalid.
2501 : There was an error loading the driver:
2502 : There was an error unloading the driver:
2504 : There was an error unconfiguring CF:
2505 : There was an error communicating with the back end:
2506 : There are no nodes in a state that can be stopped.
2507 : Error starting CF on node:
2508 : Error listing services running on node:
2510 : Error stopping CF on node:
2511 : Error stopping service on node:
2512 : Error clearing statistics on node:
2513 : Error marking down node:
2514 : To start CF on the local node, click on the appropriate button in the left hand panel. To start CF on a remote node, CF must be running on the local node.
2515 : To unconfigure CF on the local node, click on the appropriate button in the left hand panel. To unconfigure CF on a remote node, CF must be running on the local node.
2516 : CF is not running on the local node, and cannot be stopped. To stop CF on a remote node, CF must be running on the local node.
2517 : In order to mark nodes as DOWN, CF must be running on the local node.
2518 : In order to display network topology, CF must be running on the local node.
2519 : In order to display any statistics, CF must be running on the local node.
2520 : In order to clear statistics, CF must be running on the local node.
2521 : There are no nodes in a state where statistics can be displayed.
2522 : There are no nodes in a state where messages can be displayed.
2523 : There are no nodes in a state where they can be started.
2524 : There are no nodes in a state where they can be unconfigured.
2526 : Error scanning for clusters:
2528 : Please select a cluster to join.
2529 : The specified cluster name is already in use.
2532 : Probing some nodes failed. See the status window for details.
2533 : Some nodes failed CIP configuration.
2534 : Insufficient IPs for net net available in /etc/hosts. There are not enough unassigned IPs in /etc/hosts on the cluster nodes. Please remove any unneeded addresses for this subnet from /etc/hosts, or use a different subnet.
2535 : Missing node suffix for net net.
2536 : The node suffix for net net is too long.
2537 : Invalid node suffix for net net. Node names may only contain letters, numbers, and - characters.
2538 : Invalid subnet mask for net net. The subnet mask must be in the form of 4 numbers 0-255 separated by dots. Also, when written in binary, it must have all 1s before 0s.
2539 : Invalid subnet number for net net. The subnet number must be in the form of 4 numbers 0-255 separated by dots.
2540 : net net is too small. The cluster has number of nodes nodes. Only number of nodes possible host ids are supported by the IP subnet and netmask given for net net. Please use a subnet and netmask combination that has more host ids.
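Messages 2538 and 2540 describe two subnet checks: a netmask is valid only if, written in binary, it is a run of 1s followed by a run of 0s, and the chosen subnet/netmask pair must leave enough host ids for every cluster node. A hedged sketch of both checks (helper names are illustrative, not the wizard's actual code):

```python
def is_valid_netmask(mask: str) -> bool:
    """Per message 2538: four dot-separated numbers 0-255 whose
    32-bit value is all 1s followed by all 0s."""
    parts = mask.split(".")
    if len(parts) != 4 or not all(p.isdigit() and int(p) <= 255 for p in parts):
        return False
    value = 0
    for p in parts:
        value = (value << 8) | int(p)
    # Contiguous 1s then 0s: the bitwise complement is 0...01...1,
    # so complement & (complement + 1) must be zero.
    inverted = value ^ 0xFFFFFFFF
    return (inverted & (inverted + 1)) == 0

def host_count(mask: str) -> int:
    """Per message 2540: usable host ids for a (valid) netmask,
    excluding the all-zeros and all-ones host parts."""
    value = 0
    for p in mask.split("."):
        value = (value << 8) | int(p)
    host_bits = 32 - bin(value).count("1")
    return max((1 << host_bits) - 2, 0)
```

For example, 255.255.255.0 passes the contiguity check and yields 254 host ids, so it would satisfy message 2540 for any cluster of up to 254 nodes, while 255.0.255.0 fails message 2538.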
2541 : The IP range ip1/netmask1 overlaps with ip2/netmask2, which is in use on node.
2542 : The IP ranges for net net1 and net net2 overlap.
2543 : The subnet subnet has no nodes on it.
2544 : There are no nodes in a LEFTCLUSTER state (that can be marked DOWN).
2549 : Error adding node to CIM:
2550 : Error removing node from CIM:
2551 : In order to add nodes to CIM, CF must be running on the local node.
2552 : In order to remove nodes from CIM, CF must be running on the local node.
2553 : There are no nodes in a state where they can be added to CIM.
2554 : There are no nodes in a state where they can be removed from CIM.
2556 : CIM Configuration failed.
2557 : Please select a node to add.
2558 : Please select a node to remove.
2559 : Nodes already in the cluster cannot be removed.
2560 : The local node cannot be removed.
2561 : Some nodes were not stopped. See the status window for details.
2562 : Some nodes failed CF configuration.
2563 : Error adding CIM override on node:
2564 : Error removing CIM override on node:
2565 : In order to override CIM, CF must be running on the local node.
2566 : In order to remove a CIM override, CF must be running on the local node.
2567 : There are no nodes in a state where they can be overridden.
2568 : There are no nodes in a state that can have a CIM override removed.
2578 : For node, the IP addresses for interconnect interconnect_name and interconnect interconnect_name are the same.
2579 : The address for node on interconnect interconnect_name is missing.
2580 : The IP address and broadcast address for node on interconnect interconnect_name are not consistent with each other.
2582 : In order to check heartbeats, CF must be running on the local node.
2583 : For interconnect interconnect_name, the IP address for node is not on the same subnet as the IP address for node.
2585 : On interconnect interconnect_name, the IP addresses for node node and node node are the same.
2586 : Invalid CF node name for node. Lowercase a-z, 0-9, _ and - are allowed.
2587 : The CF node names for node1 and node2 are the same.
2588 : The CF node name for node is empty.
2590 : Invalid cluster name. The cluster name may contain letters, numbers, dashes, and underscores.
2591 : The CF node name for node1 is the same as the public name of node2.
2594 : CF is not running on the local node. To check CF for unload, CF must be running on the local node.
2595 : There are no nodes in a state where the unload status can be checked.
2600 : For node, the interfaces for interconnect interconnect_name and interconnect interconnect_name are on the same subnet.
2921 : Internal Error: SF Wizard: Unable to run command on node.
2922 : Internal Error: SF Wizard: Error reading file file from node.
2923 : Internal Error: SF Wizard: Reading file: Ignoring unknown data:
2925 : Internal Error: SF Wizard: Unknown data: SA_xcsf.
2926 : Passwords do not match. Retype.
2939 : Internal Error: SF Wizard: Empty data, not writing to file file on node.
2940 : Internal Error: SF Wizard: Error writing to file file on node.
2941 : You must enter a weight for each of the CF nodes.
2942 : Invalid CF node weight entered.
2943 : You must enter an admin IP for each of the CF nodes.
2944 : CF node weight must be between 1 and 300.
2946 : You must select at least one agent to continue.
2946 : You must select one agent to continue. (Solaris version 4.3A10 or later)
2947 : Timeout value must be an integer greater than zero and less than 3600.
2948 : Shutdown Facility reconfiguration failed on node.
2950 : You must specify XSCF-Name and User-Name for each of the CF nodes.
2952 : You must specify RCCU-Name for each of the CF nodes.
2959 : Timeout value is out of range.
2960 : You must specify an MMB User-Name for each of the hosts.
2961 : The MMB User-Name must be between 8 and 16 characters long.
2962 : The MMB Password must be between 8 and 16 characters long.
2963 : The MMB Panic Agent must have higher precedence than the MMB Reset Agent.
2967 : You must specify ILOM-name and User-Name for each of the CF nodes.
2968 : You must specify ALOM-name and User-Name for each of the CF nodes.
2969 : You must specify a unique hostname or IP address for XSCF Name1 and XSCF Name2.
2970 : You must specify XSCF Password for each of the CF nodes.
2971 : You must specify ILOM Password for each of the CF nodes.
2972 : You must specify ALOM Password for each of the CF nodes.
2973 :Invalid network prefix for net{0}.  The subnet number must be in the form of hexadecimal numbers separated by colons.ネット{0}のネットワークプレフィックスが無効です。ネットワークプレフィックスは、コロンで区切った数値列を16進数で表記する形式で指定します。
2974 :net {0} does not have enough address space.The cluster has {2} nodes.  Only {1} possible host ids are supported by the network prefix given for net {0}.  Please use network prefix andprefix-length combination that has more host ids.ネット{0}内のアドレスが足りません。クラスタには{2}ノードあります。ネット{0}に割り当てられたネットワークプレフィックスでサポートされたホストidは{1}のみです。より多くのホストidを持つネットワークプレフィックスとプレフィックス長の組合せを使用してください。
2975 :Invalid prefix-length for net{0}.  The prefix-length must be specified in the range of from 64 to 128.ネット{0}のプレフィックス長が無効です。プレフィックス長は64から128の範囲内で指定します。
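The capacity check implied by messages 2974 and 2975 follows from IPv6 arithmetic: a prefix length p in the range 64-128 leaves 128 - p host-id bits, so 2**(128 - p) possible host ids. The sketch below is illustrative (the function names are hypothetical); it uses only the standard `ipaddress` module.

```python
# Hypothetical sketch of the checks behind messages 2974/2975.
# A /p IPv6 prefix leaves 128 - p host bits, i.e. 2**(128 - p) host ids.
import ipaddress

def possible_host_ids(prefix: str) -> int:
    """Number of host ids available under the given IPv6 prefix."""
    net = ipaddress.IPv6Network(prefix, strict=False)
    return 2 ** (128 - net.prefixlen)

def prefix_length_ok(prefix: str) -> bool:
    """Message 2975: prefix-length must be in the range 64 to 128."""
    return 64 <= ipaddress.IPv6Network(prefix, strict=False).prefixlen <= 128
```

For example, a /126 prefix supports only 4 host ids, so a cluster with more than 4 nodes would trigger message 2974.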
2976 : The nodesuffix of net{0} and net{1} overlaps.
2977 : "RMS" was entered as the nodesuffix of net{0}. If you want to use "RMS" as the nodesuffix, select "For RMS".
2978 : The first character of the CF node name node is not a lower-case letter. RMS Wizard Tools cannot operate with this CF node name. Please use a lower-case letter as the first character of the CF node name.
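The naming rule in message 2978 amounts to a one-line check on the first character. A minimal sketch, with a hypothetical function name:

```python
# Hypothetical check behind message 2978: RMS Wizard Tools requires the
# first character of a CF node name to be a lower-case ASCII letter.
def cf_node_name_ok(name: str) -> bool:
    return bool(name) and name[0].isalpha() and name[0].islower()
```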
2980 : Failed to get domain information of OVM on the CF node <CF nodename>. Please confirm that the OS release is "Oracle Solaris 11", or "Oracle Solaris 10 9/10" or later, and that the domain type is control domain or guest domain.
2981 : PPAR-ID of the CF node <CF nodename> is invalid. Please input a numerical value within the range of 0-15 for PPAR-ID.
2982 : The domain name of the CF node <CF nodename> is invalid. Please enter a correct domain name.
2983 : You must specify XSCF-Name1 of the CF node <CF nodename>.
2984 : You must specify XSCF-Name2 of the CF node <CF nodename>.
2985 : You must specify the user of the CF node <CF nodename>.
2986 : You must specify the Password of the CF node <CF nodename>.
2987 : You must specify a unique hostname or IP address for XSCF Name1 and XSCF Name2 of the CF node <CF nodename>.
2988 : Passwords of the CF node <CF nodename> do not match. Please retry.
2991 : You must specify the Unit Name of the CF node <CF nodename> in the line <line number>.
2992 : Outlet of the CF node <CF nodename> in the line <line number> is invalid. Please input a numerical value within the range of 1-8 for Outlet.
2993 : You must specify the zonename of the CF node <CF nodename>.
2994 : You must specify the globalzone hostname of the CF node <CF nodename>.
2995 : You must specify the ILOM-name of the CF node <CF nodename>.
2996 : You must specify the User-Name of the CF node <CF nodename> in the line <line number>.
2997 : You must specify the Password of the CF node <CF nodename> in the line <line number>.
2998 : Passwords of the CF node <CF nodename> in the line <line number> do not match. Please retry.
3000 : Fatal Error processor internal: RMS session is null.
3001 : Fatal Error processor internal: RMS session graph is null.
3002 : Fatal Error processor internal: Not initialized as local or remote.
3003 : Error: Unable to get remote stream reader.
3004 : Error: Unable to obtain remote data stream: message.
3005 : Error: new Thread() failed for "" node.
3006 : Error: Reconnect failed. Data displayed for node will no longer be current.
3010 : Error: Exception while closing data reader: message.
3012 : Error: RMS and GUI are not compatible. Use a newer version of the RMS GUI.
3013 : Error: Missing local host indication from node.
3016 : Error INTERNAL: exception while reading line:
3018 : Error: Missing SysNode name declaration block.
3021 : Error node : Missing token.
3024 : Received Error: message.
3026 : Error: Connection to node node failed.
3028 : Error: Connections to all hosts failed.
3030 : Unable to shutdown RMS on node.
3031 : Error returned from hvshut. Click the "msg" tab for details.
3040 : switchlog Error: R=null.
3042 : Error Intern: Nodecmd.exec() R==null.
3043 : Error: Remote connection to node failed, Exception: message.
3044 : Error: Invoking remote application command on node exited with the following error: message.
3072 : Error: SysNode node_name points to node_kind node_name.
3088 : empty graph.
3089 : Graph has only number_of_nodes disjoint nodes and no arcs.
3091 : Application is inconsistent. Analyse the configuration before applying any RMS operation.
3102 : Internal Error: Unknown node type: node_type.
3130 : Error in loading image: image_name.
3131 : Fatal Error clusterTable: No clusterWide in clusterTable.
3132 : Fatal Error clusterTable: No tableLayout in clusterTable.
3133 : Fatal Error clusterWide: no pointer to rmsCluster.
3134 : Fatal Error clusterWide: no pointer to session.
3135 : Fatal Error clusterWide: update_display called without cluster_table.
3136 : Fatal Error nodeTable: No layout in nodeTable.
3137 : Fatal Error nodeTable: Null value at row row and at column column.
3143 : Error: No output received from command rcinfo.
3156 : Error: Unable to get HV_RCSTART value on node.
3157 : Error: Unable to get HV_AUTOSTARTUP value on node.
3158 : Error: Unable to set HV_RCSTART value on node.
3159 : Error: Unable to set HV_AUTOSTARTUP value on node.
0701 : There is no faulted resource.
0708 : proc1 finished.
0700 : The resource database is not configured. Please configure it by using the [Tool] - [Initial setup] menu.
0702 : The screen cannot be displayed from the main CRM window.
0703 : Do you want to start up resource_name (rid=rid)?
0704 : Do you want to stop resource_name (rid=rid)?
0705 : Do you want to diagnose resource_name (rid=rid)?
0707 : Do you want to begin the proc processing?
0709 : The configuration change function cannot be used because it is being used by another task.
0710 : Processing cannot be ended because the following operation instruction is not completed.
0711 : Can't get information from the resource database.
0712 : The resource database has already been configured.
0713 : A node which has completed the settings of the resource database exists.
0760 : A requested operation failed. (error details)
0761 : An internal contradiction occurred in the main CRM window. (error details)
0763 : The operation cannot be executed because the resource database is not configured on all nodes, or some nodes are not communicating with Web-Based Admin View.
0764 : An I/O error occurred.
0765 : Communication with the management server failed.
0766 : The command terminated abnormally.
0767 : The command execution failed.
0768 : The processing for proc1 did not finish normally.
0769 : The processing was aborted because it could not be done on all nodes. (error details)
0773 : The initial setup of the resource database failed. (error details)
0774 : Initial setup failed: the resource database could not be initialized.
0775 : CF is not configured, or CF is not running.
0790 : An error occurred while collecting fault resources.
0791 : You do not have the access authority to reply to this message.
0792 : An error occurred while accessing the management server. Please select [Continue] and exit the Resource Fault History.
0801 : Do you want to exit the userApplication Configuration Wizard GUI?
0802 : Do you want to cancel the setup process?
0803 : Do you want to register the setup in the cluster system?
0805 : GUI is generating RMS Configuration.
0807 : Do you want to remove only the selected userApplication (userApplication name)? Do you want to remove all the resources under the userApplication as well?
0808 : Do you want to remove the selected Resource (resource name) and all the resources under it?
0810 : Node name takeover has been registered in or removed from the userApplication. You need to restart the SysNode to enable or disable the takeover network. Restart the SysNode after completing the setup.
0813 : GUI is reading RMS Configuration. Please wait.
0814 : GUI is saving RMS Configuration to the system. Please wait.
0815 : GUI is generating RMS Configuration. Please wait.
0816 : Do you want to generate RMS Configuration?
0817 : Do you want to distribute RMS Configuration?
0897 : This configuration is now set to use the I/O fencing function. Confirm the setting status with the "I/O fencing" checkbox on the [Attributes] tab.
0830 : Since another client is using the userApplication Configuration Wizard GUI or hvw(1M), the GUI cannot be started.
0832 : The cluster resource management facility is not running. Since a list of candidate interfaces cannot be obtained, the GUI is terminated.
0833 : RMS is running. Since the Configuration might not be saved, the GUI is terminated.
0834 : An invalid character is included.
0835 : Removing the Resource (resource name) will concurrently remove the userApplication (userApplication name). Do you want to continue?
0836 : A name is not entered.
0837 : The value is invalid.
0838 : The specified takeover IP address is not available.
0839 : There is an incorrect setup. (details of the error)
0840 : The takeover network name has already been defined. Do you want to use the following definitions? (existing settings)
0841 : There is an attribute different from those of the other resources. Do you want to continue? (information on the differing attributes)
0848 : The file name is not specified.
0849 : A required setup is missing.
0852 : It is not a proper combination.
0856 : The selected userApplication or Resource cannot be edited.
0857 : The specified takeover IP address or host name has already been used.
0859 : Invalid file name or path.
0860 : The specified file already exists. Do you want to replace it?
0861 : The specified interface is different. Do you want to set up the IP address?
0866 : The file system is already in use.
0867 : Since a list of candidate interfaces cannot be obtained, the process is exited.
0868 : It is not an executable file.
0898 : The "I/O fencing" checkbox is not selected. Do you want to continue the configuration setting? If you select "Yes", this configuration is registered without using the I/O fencing function. If you select "No", this configuration setting is cancelled.
0900 : The I/O fencing function does not work correctly in this configuration because the following settings have not been made: set GDS to use the I/O fencing function, and set the XSCF (SPARC M10/M12) shutdown agent or the ICMP shutdown agent. Do you want to continue this configuration setting? If you select "Yes", this configuration is registered as using the I/O fencing function; after registering it, refer to the "PRIMECLUSTER Installation and Administration Guide" and be sure to carry out the above settings. If you select "No", this configuration setting is cancelled.
0901 : Two or more userApplications cannot work in this configuration. Do you want to continue the configuration setting? If you select "Yes", the userApplication setting proceeds; after creating the userApplication, change the shutdown agent in use to XSCF (SPARC M10/M12). If you cannot change the shutdown agent, select "No" to cancel the userApplication setting.
0880 : A non-classified error occurred. (error from the server)
0881 : Connection to the server failed.
0882 : A non-supported package is installed. Check the package version. (detailed information)
0883 : Since the specified file is in a non-supported format, it cannot be edited.
0886 : Since a list of candidate interfaces that can be set in the Resource was not acquired, the process is exited.
0888 : The command terminated abnormally. (message output from the command)
0889 : The command execution failed. (message output from the command)
0890 : The SysNode for executing the command cannot be found.
0891 : Reading RMS Configuration failed.
0893 : RMS Configuration generation failed. (message output from the command)
0895 : RMS Configuration distribution failed. (message output from the command)
0899 : You cannot use the I/O fencing function in this configuration. You must satisfy the following conditions: register Gds resource(s), and set the number of SysNodes to 2.
0100 : The cluster configuration management facility terminated abnormally.
0101 : Initialization of the cluster configuration management facility terminated abnormally.
1421 : The userApplication "userApplication" did not start automatically because not all of the nodes where it can run are online. Do you want to force the userApplication online on the SysNode "SysNode"? (yes/no) Message No.: number Warning: Forcing a userApplication online ignores potential error conditions. Used improperly, it can result in data corruption. You should not use it unless you are certain that the userApplication is not running anywhere in the cluster.
1421 : The userApplication "userApplication" did not start automatically because not all of the nodes where it can run are online. Forcing the userApplication online on the SysNode "SysNode" is possible. Warning: When performing a forced online, confirm that RMS is started on all nodes in the cluster, manually shut down any nodes where it is not started, and then perform it. With a forced online, there is a risk of data corruption due to simultaneous access from several nodes. In order to reduce that risk, nodes where RMS is not started may be forcibly stopped. Are you sure you wish to force online? (no/yes) Message No.: number
1422 : On the SysNode "SysNode", the userApplication "userApplication" is in the Faulted state due to a fault in the resource "resource". Do you want to clear the fault? (yes/no) Message No.: number
1423 : On the SysNode "SysNode", the userApplication "userApplication" has the faulted resource "resource". The userApplication did not start automatically because not all of the nodes where it can run are online. Do you want to force the userApplication online on the SysNode "SysNode"? (yes/no) Message No.: number Warning: Forcing a userApplication online ignores potential error conditions. Used improperly, it can result in data corruption. You should not use it unless you are certain that the userApplication is not running anywhere in the cluster.
1423 : On the SysNode "SysNode", the userApplication "userApplication" has the faulted resource "resource". The userApplication did not start automatically because not all of the nodes where it can run are online. Forcing the userApplication online on the SysNode "SysNode" is possible. Warning: When performing a forced online, confirm that RMS is started on all nodes in the cluster, manually shut down any nodes where it is not started, and then perform it. With a forced online, there is a risk of data corruption due to simultaneous access from several nodes. In order to reduce that risk, nodes where RMS is not started may be forcibly stopped. Are you sure you wish to force online? (no/yes) Message No.: number
2100 : The resource database has already been set. (detail:code1-code2)
2200 : Cluster configuration management facility initialization started.
2201 : Cluster configuration management facility initialization completed.
2202 : Cluster configuration management facility exit processing started.
2203 : Cluster configuration management facility exit processing completed.
2204 : Cluster event control facility started.
2205 : Cluster event control facility stopped.
2206 : The process (count: appli) was restarted.
2620 : On the SysNode "SysNode", the userApplication "userApplication" transitioned to the state state. Therefore, message "number" has been canceled.
2621 : The response to the operator intervention message "number" was action.
2622 : There are no outstanding operator intervention messages.
2700 : A failed resource has recovered. SysNode:SysNode userApplication:userApplication Resource:resource
2701 : A failed SysNode has recovered. SysNode:SysNode
2914 : A new disk device (disk) was found.
2927 : A node (node) detected an additional disk. (disk)
3040 : The console monitoring agent has been started. (node:nodename)
3041 : The console monitoring agent has been stopped. (node:nodename)
3042 : The RCI monitoring agent has been started.
3043 : The RCI monitoring agent has been stopped.
3044 : The console monitoring agent took over monitoring the node targetnode.
3045 : The console monitoring agent cancelled monitoring of the node targetnode.
3046 : The specified option is not registered because it is not required for device. (option:option)
3050 : Patrol monitoring started.
3051 : Patrol monitoring stopped.
3052 : A failed LAN device is found to be running properly as a result of hardware diagnostics. (device:altname rid:rid)
3053 : A failed shared disk unit is found to be running properly as a result of hardware diagnostics. (device:altname rid:rid)
3070 : "Wait-For-PROM" is enabled on the node. (node:nodename)
3071 : "Wait-For-PROM" of the console monitoring agent is enabled on the node. (node:nodename)
3080 : The MMB monitoring agent has been started.
3081 : The MMB monitoring agent has been stopped.
3082 : MMB has been recovered from the failure. (node:nodename mmb_ipaddress1:mmb_ipaddress1 mmb_ipaddress2:mmb_ipaddress2 node_ipaddress1:node_ipaddress1 node_ipaddress2:node_ipaddress2)
3083 : Monitoring of another node has been started.
3084 : Monitoring of another node has been stopped.
3085 : The MMB IP address or the Node IP address has been changed. (mmb_ipaddress1:mmb_ipaddress1 mmb_ipaddress2:mmb_ipaddress2 node_ipaddress1:node_ipaddress1 node_ipaddress2:node_ipaddress2)
3110 : The SNMP monitoring agent has been started.
3111 : The SNMP monitoring agent has been stopped.
3120 : The iRMC asynchronous monitoring agent has been started.
3121 : The iRMC asynchronous monitoring agent has been stopped.
3122 : MMB has been recovered. (node:nodename mmb_ipaddress1:mmb_ipaddress1 mmb_ipaddress2:mmb_ipaddress2 node_ipaddress:node_ipaddress)
3123 : iRMC has been recovered. (node:nodename irmc_ipaddress:irmc_ipaddress node_ipaddress:node_ipaddress)
3124 : The node status was received. (node:nodename from:irmc/mmb_ipaddress)
3200 : Cluster resource management facility initialization started.
3201 : Cluster resource management facility initialization completed.
3202 : Cluster resource management facility exit processing completed.
3203 : Resource activation processing started.
3204 : Resource activation processing completed.
3205 : Resource deactivation processing started.
3206 : Resource deactivation processing completed.
2207 : Process (appli) has stopped.
4250 : The line switching unit cannot be found because FJSVclswu is not installed.
5001 : The RCI address has been changed. (node:nodename address:address)
5021 : An error has been detected in part of the transmission route to MMB. (node:nodename mmb_ipaddress1:mmb_ipaddress1 mmb_ipaddress2:mmb_ipaddress2 node_ipaddress1:node_ipaddress1 node_ipaddress2:node_ipaddress2)
5100 : An error was detected in the failover unit of the line switching unit. (RCI:addr LSU:mask status:status type:type)
5200 : There is a possibility that the resource controller has not started. (ident:ident command:command, ...)
7130 : The specified resource ID (rid) cannot be deleted because it is being used.
ccmtrcstr: FJSVclerr Onltrc start fail
????  Message not found!!
0102 : A failure occurred in the node. It will be forcibly stopped.
6000 : An internal error occurred. (function:function detail:code1-code2-code3-code4)
6001 : Insufficient memory. (detail:code1-code2)
6002 : Insufficient disk or system resources. (detail:code1-code2)
6003 : Error in option specification. (option:option)
6004 : No system administrator authority.
6005 : Insufficient shared memory. (detail:code1-code2)
6006 : The required option option must be specified.
6007 : One of the required options (option) must be specified.
6008 : If option option1 is specified, option option2 is required.
6009 : If option option1 is specified, option option2 cannot be specified.
6010 : If any one of the options option1 is specified, option option2 cannot be specified.
6021 : The option option(s) must be specified in the following order: order
6025 : The value of option option must be specified in the range value1 to value2.
6200 : Cluster configuration management facility: configuration database mismatch. (name:name node:node(node-number))
6201 : Cluster configuration management facility: internal error. (node:node code:code)
6202 : Cluster event control facility: internal error. (detail:code1-code2)
6203 : Cluster configuration management facility: communication path disconnected.
6204 : Cluster configuration management facility has not been started.
6206 : Cluster configuration management facility: error in the definitions used by the command command.
6207 : The cluster domain contains one or more inactive nodes.
6208 : Access denied (target).
6209 : The specified file or cluster configuration database does not exist (target).
6210 : The specified cluster configuration database is being used (table).
6211 : A table with the same name exists (table).
6212 : The specified configuration change procedure is already registered (proc).
6213 : The cluster configuration database contains duplicate information.
6214 : Cluster configuration management facility: configuration database update terminated abnormally (target).
6215 : Cannot exceed the maximum number of nodes.
6216 : Cluster configuration management facility: configuration database mismatch occurred because another node ran out of memory. (name:name node:node)
6217 : Cluster configuration management facility: configuration database mismatch occurred because another node ran out of disk or system resources. (name:name node:node)
6218 : An error occurred during distribution of a file to a stopped node. (name:name node:node errno:errno)
6219 : The cluster configuration management facility cannot recognize the activating node. (detail:code1-code2)
6220 : Communication failed between nodes or processes in the cluster configuration management facility. (detail:code1-code2)
6221 : Invalid kernel parameter used by the cluster configuration database. (detail:code1-code2)
6222 : The network service used by the cluster configuration management facility is not available. (detail:code1-code2)
6223 : A failure occurred in the specified command. (command:command detail:code1-code2)
6226 : The kernel parameter setup is not sufficient to operate the cluster control facility. (detail:code)
6250 : Cannot run this command because FJSVclswu is not installed.
6300 : Failed to set up the resource database. (detail:code1-code2)
6302 : Failed to create a backup of the resource database information. (detail:code1-code2)
6303 : Failed to restore the resource database information. (detail:code1-code2)
6600 Cannot manipulate the specified resource. (insufficient user authority)
6601 Cannot delete the specified resource. (resource:resource rid:rid)
6602 The specified resource does not exist. (detail:code1-code2)
6603 The specified file does not exist.
6604 The specified resource class does not exist.
6606 Operation cannot be performed on the specified resource because the corresponding cluster service is not in the stopped state. (detail:code1-code2)
6607 The specified node cannot be found.
6608 Operation disabled because the resource information of the specified resource is being updated. (detail:code1-code2)
6611 The specified resource has already been registered. (detail:code1-code2)
6614 Cluster configuration management facility:internal error. (detail:code1-code2)
6615 The cluster configuration management facility is not running. (detail:code1-code2)
6616 Cluster configuration management facility: error in the communication routine. (detail:code1-code2)
6617 The specified state transition procedure file does not exist.
6618 The state transition procedure file could not be written. A state transition procedure file with the same name already exists.
6619 The state transition procedure file could not be written. There was an error in the resource class specification.
6621 Could not perform file operation on state transition procedure file. (detail:code1-code2)
6623 Cannot delete the specified state transition procedure file.
6624 The specified resource does not exist in cluster service. (resource:resource rid:rid)
6651 The specified instruction contains an error.
6653 Operation cannot be performed on the specified resource.
6655 Use the absolute path to specify the option (option).
6657 The specified resource is not being monitored. (detail:code)
6658 The specified process does not exist. (pid:pid)
6659 The specified command does not exist. (command:command)
6661 Cluster control is not running. (detail:code)
6662 A timeout occurred in process termination. (detail:code1-code2)
6665 The directory was specified incorrectly.
6668 Cannot run this command in single-user mode.
6675 Cannot run this command because product_name has already been set up.
6680 The specified directory does not exist.
6690 The specified userApplication or resource is not monitored. (resource)
6691 The userApplication cannot do the patrol monitoring because of status.
6692 Patrol monitoring timed out.
6750 A resource has faulted. SysNode:SysNode userApplication:userApplication Resource:resource
6751 A SysNode has faulted. SysNode:SysNode
6752 The processing was canceled due to the following error. Error message from RMS command
6753 Failed to process the operator intervention message due to the following error. (message number:number response:action command:command) Error message from RMS command
6754 The specified message number (number) does not exist.
6755 Failed to respond to the operator intervention message due to the SysNode (SysNode) stop. (message number:number response:action)
6780 Cannot request to the process monitoring daemon.
6781 The process (appli) cannot be monitored because the process hasn't made a process group at starting.
6782 The process (appli) was not able to be executed. (errno:error)
6807 Disk device (NodeID NodeID, disk) cannot be detected.
6817 An error occurred during state transition procedure execution. (error procedure:procedure detail:code1-code2-code3-code4-code5-code6-code7)
6836 The disk device (NodeID NodeID, disk) has changed.
6900 Automatic resource registration processing terminated abnormally. (detail:reason)
6901 Automatic resource registration processing is aborted due to one or more of the stopping nodes in the cluster domain.
6902 Automatic resource registration processing is aborted due to cluster domain configuration manager not running.
6903 Failed to create logical path. (node dev1 dev2)
6904 Fail to register resource. (detail:reason)
6905 Automatic resource registration processing is aborted due to mismatch instance number of logical device between nodes.
6906 Automatic resource registration processing is aborted due to mismatch setting of disk device path between nodes.
6907 Automatic resource registration processing is aborted due to mismatch construction of disk device between nodes.
6910 It must be restart the specified node to execute automatic resource registration. (node:node_name...)
6911 It must be matched device number information in all nodes of the cluster system executing automatic resource registration. (dev:dev_name...)
7003 An error was detected in RCI. (node:nodename address:address status:status)
7004 The RCI monitoring agent has been stopped due to an RCI address error. (node:nodename address:address)
7012 Hardware error occurred in RCI setup.
7018 The console monitoring agent has already been started.
7019 The RCI monitoring agent has already been started.
7026 HCP is not supported. (version:version)
7027 The XSCF is not supported.
7030 CF is not running.
7031 Cannot find the HCP version.
7033 Cannot find the specified CF node name. (nodename:nodename)
7034 The console information is not set. (nodename:nodename)
7035 An address error is detected in RCI. (node:nodename address:address)
7036 The RCI is not supported.
7037 The SNMP information is not set. (nodename:nodename)
7040 The console was disconnected. (node:nodename portno:portnumber detail:code)
7042 Connection to the console is refused. (node:nodename portno:portnumber detail:code)
7043 First SSH connection to the ILOM has not been done yet. (node:nodename ipaddress:ipaddress detail:code)
7050 A failure is detected in a LAN device as a result of hardware diagnostics. (node:nodename device:altname rid:rid detail:code)
7051 A network device monitoring command is abnormally terminated as a result of diagnosing a LAN device. (node:nodename device:altname rid:rid detail:code)
7052 A failure of the shared disk device is detected as a result of the hardware diagnostics. (node:nodename device:altname rid:rid detail:code)
7053 A disk monitoring command is abnormally terminated as a result of the hardware diagnostics. (node:nodename device:altname rid:rid detail:code)
7054 A designated device cannot be opened as a result of diagnosing the shared disk device. (node:nodename device:altname rid:rid detail:code)
7055 The designated LAN device cannot be found as a result of the hardware diagnostics. (node:nodename device:altname rid:rid detail:code)
7056 The flag settings of the activated LAN device is found improper as a result of the hardware diagnostics. (node:nodename device:altname rid:rid detail:code)
7101 SCF cannot be accessed because it is in the busy state. (type:type)
7102 SCF open failed. (errno:errno)
7103 SCF access failed. (errno:errno)
7104 The subclass of the line switching unit cannot be identified. (RCI:addr Subclass:no)
7105 The specified line switching unit does not exist. (RCI:addr)
7106 The power to the line switching unit is not on, or the RCI cable has been disconnected. (RCI:addr)
7108 Reservation of the line switching device failed. (RCI:addr LSU:mask retry:no)
7109 An error was detected in the switching control board of the line switching unit. (RCI:addr status:status type:type)
7110 An error was detected in the switching unit of the line switching unit. (RCI:addr LSU:mask status:status type:type)
7111 The cluster event control facility is not running. (detail:code1-code2)
7112 Communication failed in the cluster event control facility. (detail:code1-code2)
7113 Cluster event control facility: internal error. (detail:code1-code2)
7116 Port number information is not set for resource SWLine. (rid:rid)
7117 The port number specified for resource SWLine is incorrect. (rid:rid port:port)
7119 The LSU mask information has not been set for the shared resource SH_SWLine. (rid:rid)
7121 The parent resource of the shared resource SH_SWLine is a resource other than the shared resource SH_SWU. (rid:rid)
7122 The RCI address information has not been set for the shared resource SH_SWU. (rid:rid)
7125 The resource ID of the node connected to the specified port no (rid:rid) is incorrect.
7126 The resource ID (rid) of the same node is specified for ports 0 and 1.
7131 The specified resource ID (rid) is not present in the shared resource class (class).
7132 The specified resource name (name) is not present in the shared resource class (class).
7200 The configuration file of the console monitoring agent does not exist. (file:filename)
7201 The configuration file of the RCI monitoring agent does not exist. (file:filename)
7202 The configuration file of the console monitoring agent has an incorrect format. (file:filename)
7203 The username or password to login to the control port of the console is incorrect.
7204 Cannot find the console's IP address. (nodename:nodename detail:code)
7210 An error was detected in MMB. (node:nodename mmb_ipaddress1:mmb_ipaddress1 mmb_ipaddress2:mmb_ipaddress2 node_ipaddress1:node_ipaddress1 node_ipaddress2:node_ipaddress2 status:status detail:detail)
7211 The MMB monitoring agent has already been started.
7212 The MMB information is not set. (nodename:nodename)
7213 An error has been detected in the transmission route to MMB. (node:nodename mmb_ipaddress1:mmb_ipaddress1 mmb_ipaddress2:mmb_ipaddress2 node_ipaddress1:node_ipaddress1 node_ipaddress2:node_ipaddress2)
7214 The username or password to login to the MMB is incorrect.
7215 An error was detected in the MMB IP address or the Node IP address. (mmb_ipaddress1:mmb_ipaddress1 mmb_ipaddress2:mmb_ipaddress2 node_ipaddress1:node_ipaddress1 node_ipaddress2:node_ipaddress2)
7216 This server architecture is invalid.
7230 The Host OS information is not set. (nodename:nodename)
7231 Cannot find the guest domain name.
7232 Cannot find the specified guest domain name. (domainname:domainname)
7233 The username or password to login to the Host OS is incorrect.
7234 Connection to the Host OS is refused. (node:nodename detail:code)
7235 First SSH connection to the Host OS has not been done yet. (node:nodename detail:code)
7236 Connection to the Host OS was disconnected. (node:nodename detail:code)
7237 clvmgsetup has already been executed.
7240 Connection to the XSCF is refused. (node:nodename ipaddress:ipaddress detail:code)
7241 The username or password to login to the XSCF is incorrect.
7242 The SNMP agent of XSCF is disabled.
7243 The SNMP monitoring agent has already been started.
7244 Cannot find the specified PPAR-ID. (PPAR-ID:PPAR-ID)
7245 First SSH connection to the XSCF has not been done yet. (node:nodename ipaddress:ipaddress detail:code)
7500 Cluster resource management facility:internal error. (function:function detail:code1-code2)
7501 Cluster resource management facility:insufficient memory. (function:function detail:code1)
7502 Cluster resource management facility:insufficient disk or system resources. (function:function detail:code1)
7503 The event cannot be notified because of an abnormal communication. (type:type rid:rid detail:code1)
7504 The event notification is stopped because of an abnormal communication. (type:type rid:rid detail:code1)
7505 The node (node) is stopped because event cannot be notified by abnormal communication. (type:type rid:rid detail:code1)
7506 The node (node) is forcibly stopped because event cannot be notified by abnormal communication. (type:type rid:rid detail:code1)
7507 Resource activation processing cannot be executed because of an abnormal communication. (resource:resource rid:rid detail:code1)
7508 Resource (resource1 resource ID:rid1, ...) activation processing is stopped because of an abnormal communication. (resource:resource2 rid:rid2 detail:code1)
7509 Resource deactivation processing cannot be executed because of an abnormal communication. (resource:resource rid:rid detail:code1)
7510 Resource (resource1 resource ID:rid1, ...) deactivation processing is aborted because of an abnormal communication. (resource:resource2 rid:rid2 detail:code1)
7511 An error occurred by the event processing of the resource controller. (type:type rid:rid pclass:pclass prid:prid detail:code1)
7512 The event notification is stopped because an error occurred in the resource controller. (type:type rid:rid pclass:pclass prid:prid detail:code1)
7513 The node (node) is stopped because an error occurred in the resource controller. (type:type rid:rid pclass:pclass prid:prid detail:code1)
7514 The node (node) is forcibly stopped because an error occurred in the resource controller. (type:type rid:rid pclass:pclass prid:prid detail:code1)
7515 An error occurred by the resource activation processing. (resource:resource rid:rid detail:code1)
7516 An error occurred by the resource deactivation processing. (resource:resource rid:rid detail:code1)
7517 Resource (resource1 resource ID:rid1, ...) activation processing is stopped because an error occurred by the resource activation processing. (resource:resource2 rid:rid2 detail:code1)
7518 Resource (resource1 resource ID:rid1, ...) deactivation processing is aborted because an error occurred by the resource deactivation processing. (resource:resource2 rid:rid2 detail:code1)
7519 Cluster resource management facility:error in exit processing. (node:node function:function detail:code1)
7520 The specified resource (resource ID:rid) does not exist or be not able to set the dependence relation.
7521 The specified resource (class:rclass resource:rname) does not exist or be not able to set the dependence relation.
7522 It is necessary to specify the resource which belongs to the same node.
7535 An error occurred by the resource activation processing. The resource controller does not exist. (resource resource ID:rid)
7536 An error occurred by the resource deactivation processing. The resource controller does not exist. (resource resource ID:rid)
7537 Command cannot be executed during resource activation processing.
7538 Command cannot be executed during resource deactivation processing.
7539 Resource activation processing timed out. (code:code detail:detail)
7540 Resource deactivation processing timed out. (code:code detail:detail)
7542 Resource activation processing cannot be executed because node (node) is stopping.
7543 Resource deactivation processing cannot be executed because node (node) is stopping.
7545 Resource activation processing failed.
7546 Resource deactivation processing failed.
7601 A failure occurred in the setting of iRMC asynchronous monitoring agent. (detail:detail)
7602 The username or password to login to iRMC is incorrect.
7603 The authority of user to login to iRMC is incorrect.
7604 An error has been detected in the transmission route to iRMC. (node:nodename irmc_ipaddress:irmc_ipaddress node_ipaddress:node_ipaddress)
7605 An error has been detected in iRMC. (node:nodename irmc_ipaddress:irmc_ipaddress node_ipaddress:node_ipaddress detail:detail)
7606 The snmptrapd is not running. (detail:detail)
7607 The IP address version of the admin LAN of shutdown facility does not match that of iRMC.
7608 The IP address version of the admin LAN of shutdown facility does not match that of MMB.
7609 The IPMI service is not running.
7610 The authority of user to login to MMB is incorrect.
7611 The supported number of the cluster nodes is exceeded. (Max node:node)
CF: clustername: nodename is Down. (#0000 nodenum)
cf: elmlog !rebuild complete in 1 lbolt.
cf: elmlog !rebuild starting~
CF: Giving UP Mastering (Cluster already Running).
CF: Giving UP Mastering (some other Node has Higher ID).
CF: Node nodename Joined Cluster clustername. (#0000 nodenum)
CF: Node nodename Left Cluster clustername.(#0000 nodenum)
CF: Questionable node <nodeA> detected by node <nodeB>
CF: Questionable node message received from node <nodeA>: <nodeB> detected this node as questionable
CF: Starting Services.
CF: Stopping Services.
CF: (TRACE): Cfset: CLUSTER_NODEDOWN_HTBTRPLY: DFLT. (#0000 n1)
CF: (TRACE): Cfset: CLUSTER_NODEDOWN_HTBTRPLY: %s. (#0000 n1)
CF: (TRACE): Cfset: CLUSTER_TIMEOUT: %s. (#0000 n1)
CF: (TRACE): Cfset: CLUSTER_FORCE_PANIC_TIMEOUT: %s. (#0000 n1)
CF: (TRACE): Cfset: CLUSTER_IP_CTRL_TOS: %s. (#0000 n1)
CF: (TRACE): Cfset: CLUSTER_IP_DATA_TOS: %s. (#0000 n1)
CF: (TRACE): Cfset: CLUSTER_IP_TTL: %s. (#0000 n1)
CF: (TRACE): Cfset: CLUSTER_TIMEOUT: DFLT. (#0000 n1)
CF: (TRACE): Cfset: CLUSTER_FORCE_PANIC_TIMEOUT: DFLT. (#0000 n1)
CF: (TRACE): Cfset: CLUSTER_IP_TTL: DFLT. (#0000 n1)
CF: (TRACE): Cfset: CLUSTER_IP_CTRL_TOS: DFLT. (#0000 n1)
CF: (TRACE): Cfset: CLUSTER_IP_DATA_TOS: DFLT. (#0000 n1)
CF: (TRACE): CFSF: Device close.
CF: (TRACE): CFSF failure detected: no SF open: passed to ENS: nodename. (#0000 N)
CF: (TRACE): CFSF failure detected: queued for SF: nodename. (#0000 N)
CF: (TRACE): CFSF failure handoff to SF: %s. (#0000 n1)
CF: (TRACE): CFSF: interrupted wait. (#xxxx)
CF: (TRACE): CFSF leftcluster broadcast: %s. (#0000 n1)
CF: (TRACE): CFSF leftcluster received: removing new: nodename. (#0000 N)
CF: (TRACE): CFSF leftcluster received: removing pending: nodename. (#0000 N)
CF: (TRACE): CFSF leftdown combo broadcast: %s. (#0000 n1)
CF: (TRACE): CFSF nodedown broadcast: %s. (#0000 n1)
CF: (TRACE): CFSF node leaving cluster failure passed to ENS: nodename. (#0000 N)
CF: (TRACE): CFSF: Pending failure for node down broadcast. (#xxxx n1)
CF: (TRACE): CFSF: Successful open.
CF (TRACE): EnsEV: Shutdown
CF (TRACE): EnsND: Shutdown
CF (TRACE): Icf: Route UP: node src dest (#0000 nodenum route_src route_dst)
CF: (TRACE): Join Client: Setting cluster initialization timestamp.
CF: (TRACE): Join Master: Setting cluster initialization timestamp.
CF (TRACE): JoinServer: Stop
CF (TRACE): JoinServer: Startup
CF (TRACE): JoinServer: ShutDown
CF: (TRACE): Icf: Link+Route UP: node src dest. (#0000 n1 n2 n3)
CF (TRACE): Load: Complete
CF: (TRACE): nodename: detected as a questionable node.
CF: (TRACE): nodename: heartbeat reply received: Stopping requested heartbeat.
CF: (TRACE): Starting requested heartbeat for node: nodename.
CF: (TRACE): Starting voluntary heartbeat for node: nodename.
cfregd normal termination
cip: configured cipN as Addr.
Deleting server nid %d from domain %d
duplicate remote file copy request
!rebuild cluster: reason: %s
subsequent remote file copy request with stale handle
block %x, obj %x
cf: elmlog ELM:001: cluster size n
CF: (TRACE): CFSF: Duplicate open attempt. (#xxxx)
CF: (TRACE): CFSF: Invalid event for broadcast. (#xxxx n1)
CF: (TRACE): CFSF: Invalid nodeid for broadcast. (#xxxx n1)
CF: (TRACE): Link is DOWN for device: <devicename>. (#0000 N)
CF: (TRACE): Link is UP for device: <devicename>. (#0000 N)
close failed: bad handle %x type=0x%x val=0x%x
config nack failed: NOSPACE, state %s stage %s
config nreq %s to %d: NOBIGBUFF
Deleting mlock during remaster: %x %x %x
dir nreq failed: NOSPACE: dnode: %d
ELM:0003: rebuild: state: %d stage: %d  NOSPACE
ELM:0004: %d DISAGREE (nack) from %d  lm_visible: 0x%x 0x%x 0x%x 0%x
ELM:0006: %d DISAGREE (nreq) lm_visible:  0x%x 0x%x 0x%x 0%x nreq: 0x%x 0x%x 0x%x 0x%x from %d
ELM: destroying inactive resource %x
ELM: invalid query paramater
elm_mrpc_configure failed, reason=#%lx
elm_mrpc_register failed, reason=#%lx
ELM: Multiple rdomains for one resource %lx, %lx, %lx
ELM: query 0x%x from %d failed, reason: 0x%x
ELM: Running out of recovery domains!
ELM: Unknown  query 0x%x from nid %d
fixup failed: bad handle %x
freeze failed: bad resource handle 0x%x\ntype=0x%x status=0x%x mlockp=0x%x
handle failed: bad handle %x
instance of cfregd already running
Invalid ELM domain for server: %d, ignored.
lfreeze failed: NOSPACE for msg buffers
lfreeze from %d failed: lock closed %x
lk_srv_cancel_nack: stale cv_cid %d for lock %x
lk_srv_notify_nack: Stale procp for lock %x
lk_swap_config_nreg: unknown convert stage: %d
lock destroy failed: NOSPACE for dir close
look_new: kdomain %x has rdomainp %x
mopen/convert failed: bad handle %x
mopen failed: bad handle %x
mopen failed:  %x %x: NOSPACE for msg buffers
mopen master lock open failed : MAXLOCK
plock destroy failed: bad lock %x %x
plock destroy failed: NOSPACE for msg buffers
rdomain failed: NOSPACE for msg buffers
rdomain procp: %x exit, msgp: %x
rdomainp %x != NULL for kernel domains
Rdomains:  %lx and %lx for one single resource %lx
rebuild failed: NOSPACE for config_nreq, nid=%d
rebuild_pconvert_start: NOSPACE, state %s
Recovery was not registered yet: rdomainp %x
refopened %x %x convert failed: NOSPACE
refopen failed: MAXLOCK
resp %x: Bad resource type or rdomain
rfreeze failed: bad directory handle %x
rfreeze failed: NOSPACE for msg buffers
rm_sequence failed: NOSPACE for msg buffers
Running out of recovery domains
send_cancel_cack_queue: invalid procp %x for lock %x
send_convert_cack: bad procp: %x for lock %x
send_queue: invalid proc: %x for lock %x
unfreeze failed: bad res handle %x from %d
val nreq gq: lock %x closed
value failed: bad handle %x from %d
value failed: NOSPACE for msg buffers
CF: A node reconfiguration event has occured on cluster node: %s.
CF: carp_event: bad nodeid (#0000 nodenum)
CF: carp_event: Warning: CARP: Node %d trying to be our CIP address %d.%d.%d.%d. (#0000 n1 n2 n3 n4)
CF: cf_cleanup_module: defer work element still in use
CF: cf_cleanup_module: input work element still in use
CF: cflog: Check the keywords in the format specified. (#xxxx n1)
CF: cfng Error: cfng_unpack_def: nsi_getarrcnt failed. (#xxxx)
CF: cfng Error: cfng_unpack_def: nsi_setarrcnt failed. (#xxxx)
CF: cfng Error: cfng_unpack_def: nsi_unpack failed. (#xxxx)
CF: cfng Error: cfng_unpack_grpname: failed. (#xxxx)
CF: cfng Error: ng_create: cfng_unpack_def failed. (#xxxx)
CF: cfng Error: ng_create: group already exists. (#xxxx)
CF: cfng Error: ng_delete: cfng_unpack_def failed. (#xxxx)
CF: cfng Error: ng_delete: group does not exist. (#xxxx)
CF: cfng Error: ng_eventd: nsi_setarrcnt failed. (#xxxx)
CF: cfng Error: ng_op_done: cfng_unpack_def failed. (#xxxx)
CF: cfng Error: ng_replace: cfng_unpack_def failed. (#xxxx)
CF: cfng Error: ng_replace: failed. (#xxxx)
CF: cfng: ng_delete_all Error: %s. (#xxxx)
CF: cfng: ng_eventd: nsi_pack failed. (#xxxx)
CF: cfng: ng_eventd: nsi_size failed. (#xxxx)
CF: cfng: ng_node_change: failed to get name from node id. (#xxxx)
CF: cfset Error: linux_ioctl: not enough memory to return values.
CF: cfset Error: linux_ioctl: OSD_copy_from_user fails.
CF: cfset Error: linux_ioctl: OSD_copy_to_user fails.
CF: Cfset: Invalid value for CLUSTER_NODEDOWN_HTBTRPLY, setting to ON: %s. (#0000 n1)
CF: Cfset: Invalid value for CLUSTER_TIMEOUT, setting to 1: %s. (#0000 n1)
CF: cftool -k may be required for node: %s.
CF: CF-force-panic timeout: requires CFSF is not running.
CF: cip: Failed to register ens EVENT_CIP
CF: cip: Failed to register ens EVENT_NODE_LEFTCLUSTER
CF: cip: Failed to register icf channel ICF_SVC_CIP_CTL
CF: cip: message SYNC_CIP_VERSION is too short
CF: Damaged raw icf packet detected in JoinGetMessage: dropped.
CF: device %s ineligible for CF use. Now "enslaved" to a bonding driver.
CF: device %s: link is %s
CF: ELM init: : %s.
cf: eventlog   CF: Problem detected on cluster interconnect device-name to node nodename: missing heartbeat replies.
CF: Icf: Xmit timeout: flush svcq.
CF: Initialization failed. Error: No network interface devices found.
CF: Initialization failed. Error: Unsupported network interface devices found.
CF: Join Aborted: nodename1 or nodename2 must be removed from cluster.
CF: Join Aborted: Duplicate nodename nodename exists in cluster clustername.
CF: Join client nodename timed out. (#0000 nodenum)
CF: Join Error: Attempt to send message on unconfigured device. (#0000 n1)
CF: Join Error: Invalid configuration: another node has a device with a duplicate address.
CF: Join Error: Same node ID for nodename1 and nodename2 in same cluster. (#0000 n1)
CF: Join Read Config Error: internal configuration error.
CF: Join Server Reply Error: Attempt to send message on unconfigured device. (#0000 n1)
CF: MTU mismatch found on cluster interconnect devname to node nodename.
CF: No join servers found.
CF: nodename: busy serving another client: retrying.
CF: nodename Error: local node has no route to node: join aborted.
CF: nodename Error: no echo response from node: join aborted.
CF: OSD_allocate_memory: Warning, vmalloc failure...retrying (%d %d)
CF: OSD_defer_work_element_alloc: no free work_elements
CF: OSD_defer_work_element_alloc: no more work_elements, never should be here
CF: OSD_iwork_element_alloc: no free work_elements
CF: OSD_iwork_element_alloc: no more work_elements, never should be here
CF: Problem detected on cluster interconnect devicename to node nodename : ICF route marked down.
CF: Received out of sequence packets from join client: nodename
CF: route ignored on local device: %s. (#0000 n1 n2 n3)
CF: Route recovery on devname to node nodename. (#xxxx n1 n2 n3 n4)
CF: servername: ~
CF: socket refcnt less than initialized value: ignored. (#0000 n1)
CF: symsrv loadsyms Error: failure to upload symsrv semaphore function table. (#0000 n1)
CF: this node attempting to declare itself failed: ignored.
CF: this node attempting to declare itself questionable: ignored.
CF: unexpected icfip_sock_data_ready call: ignored.
CF: unexpected icfip_sock_destruct call: ignored. (#0000 n1)
CF: unexpected icfip_sock_error_report call: ignored.
CF: unexpected icfip_sock_state_change call: ignored.
CF: unexpected icfip_sock_write_space call: ignored.
CF: Unknown log event: strings: %s, %s
CF: User level event memory overflow: Event dropped (#0000 eventid)
CF: WARNING: a new symsrv driver will be used upon system reboot.
CF: WARNING: an old version of the symsrv driver is being used.
CF: WARNING: cfconfig command execution will not be synchronized.
CF: %s: busy: a node is leaving cluster: retrying.
CF: %s: busy: node down processing in progress: retrying.
CF: %s: device not found.
CF: %s: missed echo responses: retrying.
CF: %s: Node down in progress. Joins retry until node down complete.
CF: %s: Node leaving cluster. Joins retry until node leaves cluster.
CF: %s: Node trying to join cluster but not declared DOWN.
cf_register_ioctls: %d errors
cf_unregister_ioctls: %d errors
cip: Add device %s does not support multicast (0x%x)
cip: Device %s does not support multicast (0x%x)
cip: failed to allocate memory for device.
cip: Msg size %d exceeds mtu %d
cip: NULL in_dev for unit %d
cip: Only ETH_P_IP type packet is supported.
cip: receiving multicast address requests of invalid size (%d, %d)
cip: Strange socket buffer. Insufficient room.
cluster interconnect is not yet configured.
create_dev_entry for %s: %s got 0x%08x
Failed to Register the CF driver (rc=0x%08x)
linux_setconf: no configured netinfo devices found
OSD_lock_user_memory: NOT IMPLEMENTED
OSD_NET_one_device_detach: invalid device index
OSD_SYS_create_device failed to create cip.
OSD_unlock_user_memory: NOT IMPLEMENTED
size of base data structures do not match!!! cannot load!!! exiting...
Warning!!! ifreq size is different
Warning!!! in_device size is different
Warning!!! in_ifaddr size is different
Warning!!! net_device size is different
Warning!!! packet_type size is different
Warning!!! rwlock_t size is different
Warning!!! semaphore size is different
Warning!!! sk_buff size is different
Warning!!! spinlock_t size is different
Warning!!! timer_list size is different
Warning!!! wait_queue_head_t size is different
cannot get data file configuration : '#xxxx: %s : %s
cfconfig: dl_bind: DL_SYSERR error
cfconfig: get_net_dev: dl_bind failed: /dev/<network interface name>
cfrecon: dl_bind: DL_SYSERR error
cfrecon: get_net_dev: dl_bind failed: /dev/<network interface name>
cfset: cfset_getconf_kernel: malloc failed : '#xxxx: %s : %s
cfset: /etc/default/cluster.config is not loaded successfully : '#xxxx: %s : %s
cfset_get_conf: malloc failed
CF: carp_broadcast_version: Failed to announce version cip_version
CF: ens_nicf_input Error:unknown msg type received. (#0000 msgtype)
CF: Icf Error: (service err_type route_src route_dst). (#0000 service err-type route_src route_dst )
CF: Join Error: Invalid configuration: asymmetric cluster.
CF: Join Error: Invalid configuration: multiple devs on same LAN.
CF: Join postponed: received packets out of sequence from servername.
CF: Join postponed, server servername is busy.
CF: Join timed out, server servername ~
CF: Local node is missing a route from node:nodename
CF: missing route on local device:devicename
CF: Local Node nodename Created Cluster clustername. (#0000 nodenum)
CF: Local Node nodename Left Cluster clustername.
cf:mipc  : ib_available gethostbyname: No such file or directory
CF: This kernel version is not supported.
CF (TRACE): cip: Announcing version cip_version
check_matching_key: malloc failed
cluster file last updated by %s on node %s at %.24s
cluster is using empty data file
cms_ack_event failed : '#xxxx: %s : %s
cms_post_event failed: check commit event : '#xxxx: %s : %s
commit event received without active transaction
compare error: key read "%s", expected key "%s"
compare error: key "%s" entry size = %d, expected size %d
configuration not set for remote execution request: src node %d: cmd = %s
configuration not set for remote file copy request
control event notification failed
corrupt file entry: key "%s" : '#xxxx: %s : %s
corrupt sync data: key "%s": '#xxxx: %s : %s
data compare error: key "%s"
data file closed during sync
data file closed during transaction
data/temp file closed during update
duplicate name %s: line %d: ignored
empty sync reply without EOF
ENS events dropped
error setting data file gen num : '#xxxx: %s : %s
expected EOF
failed to associate user event (index = %d) : '#xxxx: %s : %s
failed to chmod remote file copy tmp file : '#xxxx: %s : %s
failed to chown remote file copy tmp file : '#xxxx: %s : %s
failed to close remote file copy tmp file : '#xxxx: %s : %s
failed to end daemon transaction : '#xxxx: %s : %s
failed to find signaled event (index = %d)
failed to get daemon request : '#xxxx: %s : %s
failed to get ENS event (index = %d) : '#xxxx: %s : %s
failed to get node details : '#xxxx: %s : %s
failed to get node id : '#xxxx: %s : %s
failed to get node name : '#xxxx: %s : %s
failed to handle left cluster : '#xxxx: %s : %s
failed to init user event (index = %d) : '#xxxx: %s : %s
failed to open remote file copy tmp file : '#xxxx: %s : %s
failed to register for user event (index = %d) : '#xxxx: %s : %s
failed to set daemon state to ready : '#xxxx: %s : %s
failed to set daemon state to sync : '#xxxx: %s : %s
failed to start daemon transaction : '#xxxx: %s : %s
failed to stat remote file copy dst : '#xxxx: %s : %s
failed to write to remote file copy tmp file : '#xxxx: %s : %s
failure to ack commit event : '#xxxx: %s : %s
failure to open cfrs device : '#xxxx: %s : %s
failure to rename dstpath to %s : '#xxxx: %s : %s
failure to set daemon info : '#xxxx: %s : %s
failure to spawn: %s : '#xxxx: %s : %s
first sync request return pkg size = 0
fork failure : '#xxxx: %s : %s
fread temp output file error : '#xxxx: %s : %s
fseek temp output file error : '#xxxx: %s : %s
handle_notification(): id %ld get a null pointer
handle_notification(): wrong context owner %ld
invalid transaction timeout node id NodeID
invalid usage of %s: %s
line %d: name %s missing value
line %d: name %s value too long
line %d: name too long (%d max)
line %d: premature EOF: last value ignored
line %d: value without a name: ignored
lk_wait(): Error wrong connection
lk_wait(): Error wrong Owner
lk_wait(): wrong context owner %ld
local file last updated by %s on node %s at %.24s
memory allocation failed (header: size = %d)
memory allocation failed (init file copy req: size = %d)
memory allocation failed (old data file: size = %d)
memory allocation failed (remote file copy queue entry: size = %d)
memory allocation failed (req return: size = %d)
memory allocation failed (sub file copy req: size = %d)
memory allocation failed (sync entry key: size = %d)
memory allocation failed (sync reply: size = %d)
memory allocation failed (sync req: size = %d)
memory allocation failed (sync request: size = %d)
memory allocation failed (update infos: size = %d)
missing entry with key "%s"
missing entry with key "%s" for delete
missing entry with key "%s" for modify
name %s: line %d: too many configuration entries (%d max) remaining entries ignored
next sync request return pkg size = 0
nodegroup load failed due to malloc failure
nodegroup load failed due to nsi failure
nodegroup load failed : '#xxxx: %s : %s
nsi_getarrcnt error: init file copy data
nsi_getarrcnt error: request key
nsi_getarrcnt error: sub file copy data
nsi_getarrcnt error: sync reply data
nsi_getarrcnt error: update event data
nsi_pack error: check commit
nsi_pack error: control event
nsi_pack error: first sync request
nsi_pack error: next sync request
nsi_pack error: remote file copy response
nsi_pack error: sync reply
nsi_pack_buffer error: header
nsi_pack_buffer response error
nsi_pack_size error: header
nsi_pack_size error: sync reply
nsi_setarrcnt error: init file copy data
nsi_setarrcnt error: request key
nsi_setarrcnt error: sub file copy data
nsi_setarrcnt error: sync reply data
nsi_setarrcnt error: sync request key
nsi_setarrcnt error: update event data
nsi_unpack error: commit event
nsi_unpack error: daemon down event
nsi_unpack error: first sync request return
nsi_unpack error: init file copy request
nsi_unpack error: next sync request return
nsi_unpack error: sub file copy request
nsi_unpack error: sync reply
nsi_unpack error: sync request
nsi_unpack error: update event
open %s : '#xxxx: %s : %s
OSDU_open_symsrv: failed to open /dev/symsrv: #%04x: %s: %s
OSDU_select_nic: %s not a selectable device
OSDU_start: CF configured invalid IP address %s
OSDU_start: CF configured IP device %s not available
OSDU_start: CF configured too many IP addresses %s
OSDU_start: Could not get configuration: The fast start option requires a valid configuration file.
OSDU_start: failed to load the driver
OSDU_start: failed to load the symsrv driver
OSDU_start: failed to open /dev/linux (%s)
OSDU_start: failure to determine boot time
OSDU_start: LINUX_IOCTL_SETCONF ioctl failed
OSDU_stop: enable unload failed
rcqconfig failed to configure qsm due to cfreg failure.
rcqconfig failed to configure qsm due to cfreg_put failure.
rcqconfig failed to configure qsm due to ens post event failure.
read entry with key "%s"
received remote service request during sync phase
received sync request during sync phase
received transaction timeout request during sync phase
received wrong ENS event (received 0x%x, expected 0x%x)
remote file copy destination not regular file
response buffer size too large: %d
send_request: bogus return value %d
starting transaction without empty temp file
starting transaction without open data file
sync compare data overflow: size %d expected 0
sync data overflow: %d expected 0
sync data too small for compare: %d
sync reply data size corrupt: entsize = %d, datasize = %d
sync reply data size corrupt: size = %d, datasize = %d
sync reply data too small: %d
temp file not open
transaction handle validation error : '#xxxx: %s : %s
uev_wait failed : '#xxxx: %s : %s
unknown update type %d
wait failure : '#xxxx: %s : %s
%s: cannot create
%s: close failure
%s: failed to open for read
%s: failure to remove : '#xxxx: %s : %s
%s: failure to rename to %s
%s: failure to rename to dstpath : '#xxxx: %s : %s
%s: fseek 0 failed
%s: fwrite entry returned %d, expected %d
%s: fwrite entsize returned %d, expected %d
%s: fwrite header returned %d, expected %d
%s: fwrite sync data returned %d, expected %d
%s: fwrite update info returned %d, expected %d
%s: not open for write
%s: read error : '#xxxx: %s : %s
%s: synchronization failed
/etc/default/cluster.config is not loaded successfully : '#xxxx: %s : %s
Advertisement server successfully started
After the delay of value seconds, nodename would kill:
All cluster hosts have reported their weight
All hosts in the shutdown list are DOWN. Delay timer is terminated
Already killed node : nodename, ignoring InvokeSA()
A reconfig request came in during a shutdown cycle, this request was ignored
A reconfig request is being processed. This request was ignored
A request to clean rcsd log files came in during a shutdown cycle, this request was ignored
A Shutdown request for host nodename is already in progress. Merging this request with the original request
A shutdown request has come in during a test cycle, test of Shutdown Agent PID pid will be terminated
Broadcasting KRlist...
Cleaning advertisements from the pipe name
cleaning pending InvokeSA()
Cleaning RCSD log files
CLI request to Shutdown host nodename
could not get NON machine weights from RMS
disablesb.cfg does not exist, errno errno
Eliminating host nodename has been taken care of
Failed to break into subclusters
Failed to calculate subcluster weights
Failed to cancel thread, thread of string string of host nodename
Failed to prune the KR list
Failed to VerifyMaster
Finished Resolving Split.
Finished wait for Delay Timer. Starting to resolve split
For string: MyCH = 0xvalue, MySC = 0xvalue
Forced to re-open CLI pipe due to a missing pipe name
Forced to re-open CLI pipe due to an invalid pipe name
Forced to re-open CLI pipe due to failed stat pipe name errno
Fork Shutdown Agent(PID pid) to action host nodename
Gathering CF Status
getservbyname returned portnumber as the port for sfadv server
host nodename has already been killed. Ignoring this request
Host nodename has been put in the shutdown list
host nodename is the value highest weight in the cluster
Host nodename, MA Monitoring Agent, MAHostGetState():string
Host nodename, MA Shutdown Agent, MAHostInitState() returned value
Host nodename (reported-weight:value) wants to kill nodename - master nodename
icf_ping returned value(string)
In string : Select returned value, errno errno
InvokeSA( nodename, action, number )
Kill requests:
Kill Requests are:
KR from nodename to kill nodename had master nodename, now has master nodename
MA Monitoring Agent reported host nodename leftcluster, state string
No kill requests from my subcluster
node-factor set to default value: value
node-factor updated to : value
nodename was leaving cluster
pclose failed for command. errno = errno
Pending InvokeSA():
PID pid has indicated a {successful | failed} host action
PID pid has indicated a successful host shutdown
Processing event for host nodename
RCSD already running
RCSD controlled daemon exit completed
RCSD has detected some data on the pipe RCSDNetPipe.
RCSD log files cleaned successfully
RCSD started
Received the string command from the CLI PID pid
Restarting advertisement server thread after a reconfig
RMS is NOT running on this system
RMS is running on this system
SA Shutdown Agent to init host nodename succeeded
SA Shutdown Agent to shutdown host nodename succeeded
SA Shutdown Agent to unInit host nodename succeeded
Sending a Dummy KRlist - No Kill req only weight
SF will be the Split Brain Manager on this system
Shutdown request had come in. Reconfig is ignored
Exit request had come in. Reconfig is ignored
Skipping weight of nodename, it is DOWN
Skipping weight of nodename, it has never joined the cluster
sleep n seconds before invoking Shutdown Agent to kill nodename
SMAWRrms is installed on this system
SMAWRrms is NOT installed on this system
Split Brain processing completed. Nothing to do on local node
Split Brain processing is in progress, saving InvokeSA into the Buffer
Starting the Advertisement server on host( IP address ), port:number
Starting the RCSD Daemon
Sub-cluster master nodename, Sub-cluster weight value (value%) count value
Sub-cluster statistics:
The RCSD on host nodename is running in CF mode
The SF-CF has received event
The SF-CF has successfully declared host nodename event
Throwing away the KRlist
Total Cluster weight is :value
Total Cluster Weight: value, Percentage of Cluster weight missing: value%
Total potential weight is value (value application + value machine)
Total user application weight for all user applications online on local host is value
Total user application weight for the total cluster is value
unable to verify admIP: IP address
Unknown command from sdtool, command value
Waiting for localCHost->CH_delay: n
weight-factor set to default value: value
weight-factor updated to : value
Advertisement to host:nodename on admIP:string failed
All cluster hosts have NOT reported their weight
A request to exit rcsd came in during a shutdown cycle, this request was ignored
Cannot open CIP configuration file : file
checkAdmInterface : can't open datagram socket. errno=errno
cleanUpServerThread: Failed to cancel advertisement server thread
command timed out after 0.1 sec
Failed in plock(). errno errno
Failed in priocntl(option). errno errno. RCSD is not a real-time process
Failed to cancel thread of string
Failed to do fcntl(serversockfd, FD_CLOEXEC) errno errno
Failed to do string, reason (value)string
Failed to get nodeid for host nodename. reason (value)string
Failed to open CLI response pipe for PID pid, errno errno
Failed to open lock file
Failed to perform delay
Failed to read the received advertisement from the rcsd net pipe
host information for string not found
gethostbyname returned Invalid address for string
host nodename has no input in 2 seconds. Ignore it
Host nodename, MA Monitoring Agent, MAHostGetState() failed
Local host is not defined in rcsd.cfg
makeXDRfromAdv: can't convert NULL ad to XDR
No/Invalid admin LAN specified. Advertisement server will not be started
open failed on rcsd net pipe name, errno errno
open failed on RCSD response pipe name, errno errno
PID pid exitted due to receiving signal number number
PID pid exitted with a non-zero value of value
PID pid was stopped with signal number number
Pthread failed: pthread_XXXX : errcode num string
WARNING : Pid process id is not able to be terminated. The SA Shutdown Agent is now disabled from host nodename
popen failed for command. errno = errno
SA Shutdown Agent to shutdown host nodename failed
SA string does not exist
Sending type to host nodename failed, ackId=number
Shutdown Agent Shutdown Agent timeout for host <nodename> is less than 20 seconds
The RCSD on host nodename is NOT running in CF mode
The SA Shutdown Agent to action host nodename has exceeded its configured timeout, pid process id will be terminated
The SF-CF failed to declare host nodename(nodeid number) string, reason (value)string
Unknown host nodename
WARNING: No context allocation. MA Monitoring Agent for host nodename is neglected
write failed on rcsd net pipe name, errno errno
Advertisement Client : can't open datagram socket. errno = errno
Advertisement Client : sendto error on socket errno errno
Calculation of sum of machine weights failed
checkAdmInterface : can't bind local address. errno=errno
Advertisement server: can't bind local address, errno errno
ERROR:Admin LAN and CIP on the same interface
Error in option specification. (option:option)
Failed to convert AdvData to XDR: string
Failed to convert XDR to advData: string
FATAL ERROR: Rcsd fails to continue. Exit now.
Host list(hlist) Empty
The shutdown attempt for host <hostname> could not complete - all SA's failed
Advertisement server: Data received will be discarded due to receive error on socket. errno = errno
Agent Shutdown Agent uninitialization for host nodename failed
cannot determine the port on which the advertisement server should be started
Could not correctly read the rcsd.cfg file.
Decryption of SecretAccesskey failed.
/etc/sysconfig/libvirt-guests is not configured on Hypervisor of host nodename. rcsd died abnormally.
Failed to create a signal handler for SIGCHLD
Failed to create a signal handler for SIGUSR1
Failed to get kernel parameter kernel.panic.
Failed to get kernel parameter kernel.sysrq.
Failed to get kernel parameter kernel.unknown_nmi_panic.
Failed to unlink/create/open CLI Pipe
Failed to open CFSF device, reason (value)string
Fail to post LEFTCLUSTER event:string
FATAL: rcsd died too frequently. It will not be started by rcsd_monitor.
fopen of /etc/opt/SMAW/SMAWsf/rcsd.cfg failed, errno errno
Forced to re open rcsd net pipe due to an invalid pipe name
Forced to re-open rcsd net pipe due to a missing pipe name
Forced to re-open rcsd net pipe due to failed stat pipe name errno: errno
function of file failed, errno errno
h_cfsf_get_leftcluster() failed. reason: (value)string
HostList empty
Host <nodename> ICF communication failure detected
Host nodename MA_exec: string failed, errno errno
Illegal /etc/kdump.conf file. default option is not found.
Illegal /etc/kdump.conf file. default option setting is incorrect.
Illegal /etc/kdump.conf file. kdump_post option is not found.
Illegal /etc/kdump.conf file. kdump_post option setting is incorrect.
Illegal /etc/opt/SMAW/SMAWsf/SA_vmawsReset.cfg file. CFName=nodename is not found.
Illegal /etc/opt/SMAW/SMAWsf/SA_vmawsAsyncReset.cfg file. CFName=nodename is not found.
Illegal /etc/opt/SMAW/SMAWsf/SA_vmawsReset.cfg file. itemname is not found.
Illegal /etc/opt/SMAW/SMAWsf/SA_vmawsAsyncReset.cfg file. itemname is not found.
Illegal /etc/opt/SMAW/SMAWsf/SA_vmawsReset.cfg file. The invalid data is included.
Illegal /etc/opt/SMAW/SMAWsf/SA_vmawsAsyncReset.cfg file. The invalid data is included.
Illegal /etc/opt/SMAW/SMAWsf/SA_vmazureReset.cfg file. CFName=nodename is not found.
Illegal /etc/opt/SMAW/SMAWsf/SA_vmazureReset.cfg file. itemname is not found.
Illegal /etc/opt/SMAW/SMAWsf/SA_vmazureReset.cfg file. The invalid data is included.
Illegal /etc/opt/SMAW/SMAWsf/SA_vmnifclAsyncReset.cfg file. CFName=nodename is not found.
Illegal /etc/opt/SMAW/SMAWsf/SA_vmnifclAsyncReset.cfg file. itemname is not found.
Illegal /etc/opt/SMAW/SMAWsf/SA_vmnifclAsyncReset.cfg file. The invalid data is included.
Illegal /etc/opt/SMAW/SMAWsf/SA_vmosr.cfg file. CFName=nodename is not found.
Illegal /etc/opt/SMAW/SMAWsf/SA_vmosr.cfg file. itemname is not found.
Illegal /opt/SMAW/SMAWRrms/etc/os_endpoint.cfg file. "itemname" is not found.
Illegal /opt/SMAW/SMAWRrms/etc/os_endpoint.cfg file. The invalid character string of "/vX.X" is included in "itemname".
Illegal configfile file. item is not found
Illegal kernel parameter. kernel.panic setting is incorrect.
Illegal kernel parameter. kernel.sysrq setting is incorrect.
Illegal kernel parameter. kernel.unknown_nmi_panic setting is incorrect.
Malloc failed during function
Node id number ICF communication failure detected
rcsd died abnormally. Restart it.
SA_lkcd: FJSVossn is not installed.
SA SA_blade to test host nodename failed
SA SA_ipmi to test host nodename failed
SA SA_lkcd to test host nodename failed
SA SA_icmp to test host nodename failed
SA SA_ilomp.so to test host nodename failed
SA SA_ilomr.so to test host nodename failed
SA SA_rccu.so to test host nodename failed
SA SA_xscfp.so to test host nodename failed
SA SA_xscfr.so to test host nodename failed
SA SA_irmcf.so to test host nodename failed
SA SA_irmcp.so to test host nodename failed
SA SA_irmcr.so to test host nodename failed
SA SA_kzchkhost to test host nodename failed
SA SA_kzonep to test host nodename failed
SA SA_kzoner to test host nodename failed
SA SA_libvirtgp to test host nodename failed
SA SA_libvirtgr to test host nodename failed
SA SA_mmbp.so to test host nodename failed
SA SA_mmbr.so to test host nodename failed
SA SA_pprcip.so to test host nodename failed
SA SA_pprcir.so to test host nodename failed
SA SA_rpdu to test host nodename failed
SA SA_sunF to test host nodename failed
SA SA_vmawsReset to test host nodename failed
SA SA_vmawsAsyncReset to test host nodename failed
SA SA_vmazureReset to test host nodename failed
SA SA_vmchkhost to test host nodename failed
SA SA_vmgp to test host nodename failed
SA SA_vmSPgp to test host nodename failed
SA SA_vmSPgr to test host nodename failed
SA SA_vmk5r to test host nodename failed
SA SA_vmnifclAsyncReset to test host nodename failed.
SA SA_vmosr to test host nodename failed
SA SA_vwvmr to test host nodename failed
SA SA_xscfsnmpg0p.so to test host nodename failed
SA SA_xscfsnmpg1p.so to test host nodename failed
SA SA_xscfsnmpg0r.so to test host nodename failed
SA SA_xscfsnmpg1r.so to test host nodename failed
SA SA_xscfsnmp0r.so to test host nodename failed
SA SA_xscfsnmp1r.so to test host nodename failed
SA Shutdown Agent to init host nodename failed
SA Shutdown Agent to test host nodename failed
SA Shutdown Agent to unInit host nodename failed
select of CLI Pipe & RCSDNetPipe failed, errno errno
string in file file around line number
The attempted shutdown of cluster host nodename has failed
The authentication request failed.
The AWS CLI execution failed.
The Azure CLI execution failed.
The configuration file /etc/opt/SMAW/SMAWsf/SA_vmawsReset.cfg does not exist.
The configuration file /etc/opt/SMAW/SMAWsf/SA_vmawsAsyncReset.cfg does not exist.
The configuration file /etc/opt/SMAW/SMAWsf/SA_vmazureReset.cfg does not exist.
The configuration file /etc/opt/SMAW/SMAWsf/SA_vmnifclAsyncReset.cfg does not exist.
The configuration file /etc/opt/SMAW/SMAWsf/SA_vmosr.cfg does not exist.
The configuration file /opt/SMAW/SMAWRrms/etc/os_endpoint.cfg does not exist.
The configuration file configfile does not exist
The information acquisition request of the virtual machine instance-id failed.
The information acquisition request of the virtual machine instancename failed.
The information acquisition request of the virtual machine resource-id failed.
The information acquisition request of the virtual machine ServerName failed.
The SF-CF event processing failed string, status value
The SF-CF has failed to locate host nodename
The SF-CF initialization failed, status value
The specified guest domain cannot be connected. (nodename:nodename)
The stop request of the virtual machine instance-id failed.
The stop request of the virtual machine instancename failed.
The stop request of the virtual machine resource-id failed.
The stop request of the virtual machine ServerName failed.
(ADC, 6) Host <SysNode> with configuration <configfile> requested to join its cluster.
(ADC, 22) Attempting to clear the cluster Wait state for SysNode <sysnode> and reinitialize the Online state.
(ADC, 26) An out of sync modification request request1, request2 has been detected.
(ADC, 28) Dynamic modification finished.
(ADC, 29) Config file "CONFIG.rms" is absent or does not contain a valid entry, remaining in minimal configuration.
(ADC, 42) No remote host has provided configuration data within the interval specified by HV_WAIT_CONFIG. Running now the default configuration as specified in "CONFIG.rms"
(ADC, 50) hvdisp temporary file <filename> exceeded the size of <size> bytes, hvdisp process <pid> is restarted.
(ADC, 52) Waiting for application <userapplication> to finish its <request> before shutdown.
(ADC, 53) Waiting for application <app> to finish its switch to a remote host before shutdown.
(ADC, 54) Waiting for host <sysnode> to shut down.
(ADC, 55) No busy application found on this host before shutdown.
(ADC, 56) Waiting for busy or locked application <app> before shutdown.
(ADC, 66) Notified SF to recalculate its weights after dynamic modification.
(ADC, 71) Please check the bmlog for the list of environment variables on the local node.
(ADM, 35) Dynamic modification started with file <configfilename> from host <sysnode>.
(ADM, 92) Starting RMS now on all available cluster hosts
(ADM, 93) Ignoring cluster host <SysNode>, because it's in state: State
(ADM, 94) Starting RMS on host <SysNode> now!
(ADM, 101) Processing forced shutdown request for host SysNode. (request originator: RequestSysNode)
(ADM, 103) app: Shutdown in progress. AutoSwitchOver (ShutDown) attribute is set, invoking a switchover to next priority host
(ADM, 104) app: Shutdown in progress. AutoSwitchOver (ShutDown) attribute is set, but no other Online host is available. SwitchOver must be skipped!
(ADM, 108) Processing shutdown request for host SysNode. (request originator: RequestSysNode)
(ADM, 109) Processing shutdown request for all hosts. (request originator: SysNode)
(ADM, 112) local host is about to go down. CLI requests on this hosts are no longer possible
(ADM, 119) Processing hvdump command request for local host <sysnode>.
(ADM, 124) Processing forced shutdown request for all hosts. (request originator: SysNode)
(ADM, 127) debugging is on, watch your disk space in /var (notice #count)
(BAS, 33) Resource <resource> previously received detector report "DetReportsOfflineFaulted", therefore the application <app> cannot be switched to this host.
(BM, 9) Starting RMS monitor on host <sysnode>.
(BM, 27) Application <userapplication> does not transition to standby since it has one or more faulted or cluster exclusive online resources.
(BM, 34) RMS invokes hvmod in order to modify its minimal configuration with a startup configuration from the command line.
(BM, 35) RMS invokes hvmod in order to bring in a new host that is joining a cluster into its current configuration.
(BM, 36) RMS invokes hvmod in order to modify its minimal configuration with a configuration from a remote host while joining the cluster.
(BM, 37) RMS invokes hvmod in order to delete a host from its configuration while ejecting the host from the cluster.
(BM, 38) RMS invokes hvmod in order to bring a host from a cluster to which it is about to join.
(BM, 39) RMS invokes hvmod in order to run a default configuration.
(BM, 40) RMS starts the process of dynamic modification due to a request from hvmod utility.
(BM, 43) Package parameters for packagetype package <package> are <packageinfo>.
(BM, 44) Package parameters for <package1> / <package2> package not found.
(BM, 45) This RMS monitor has been started as <COMMAND>.
(BM, 47) RMS monitor has exited with the exit code <code>.
(BM, 48) RMS monitor has been normally shut down.
(BM, 50) RMS monitor is running in CF mode.
(BM, 55) The RMS-CF-CIP mapping in <configfile> for SysNode name <SysNode> has matched the CF name <cfname>.
(BM, 56) The RMS-CF-CIP mapping in <CONFIGFILENAME> for SysNode name <SYSNODE> has found the CF name to be <CFNAME> and the CIP name to be <CIPNAME>.
(BM, 57) The RMS-CF-CIP mapping in <configfile> for SysNode name <SysNode> has failed to find the CF name.
(BM, 60) The resulting configuration has been saved in <filename>, its checksum is <checksum>.
(BM, 61) A checksum verification request has arrived from host <sysnode>, that host's checksum is <xxxx>.
(BM, 62) The local checksum <xxxx> has been replied back to host <sysnode>.
(BM, 63) Host <sysnode> has replied the checksum <xxxx> equal to the local checksum. That host should become online now.
(BM, 64) Checksum request has been sent to host <hostname>.
(BM, 65) Package parameters for <package> package not found.
(BM, 84) The RMS-CF-CIP mapping in <configfilename> for SysNode name <sysnode> has found the CF name to be <cfname> and the CIP name to be <cipname>, previously defined as <olscfname>.
(BM, 87) The Process Id (pid) of this RMS Monitor is <PID>.
(BM, 91) Some scripts are still running. Waiting for them to finish before normal shutdown.
(BM, 100) Controlled application <app> is controlled by a scalable controller <controller>, but that application's AutoStartUp attribute is set to 1.
(BM, 102) Application <app> has a scalable controller, but that application has its AutoStartUp attribute set to 0.
(BM, 115) The base monitor on the local host has captured the lock.
(BM, 120) The RMS base monitor is locked in memory via mlockall().
(BM, 121) RMS monitor uses the <class> scheduling class for running scripts.
(CML, 3) *** New Heartbeat_Miss_Time = time sec.
(CML, 16) Turn log off by user.
(CML, 22) Modify log level, bmLogLevel = "loglevel".
(CTL, 3) Controller <controller> is requesting online application <app> on host <SysNode> to switch offline because more than one controlled application is online.
(CTL, 4) Controller <controller> has its attribute AutoRecoverCleanup set to 1. Therefore, it will attempt to bring the faulted application offline before recovering it.
(CTL, 5) Controller <controller> has its attribute AutoRecoverCleanup set to 0. Therefore, it will not attempt to bring the faulted application offline before recovering it.
(CTL, 9) Controller <controller> has restored a valid combination of values for attributes <IgnoreOnlineRequest> and <OnlineScript>.
(CTL, 10) Controller <controller> has restored a valid combination of values for attributes <IgnoreOfflineRequest> and <OfflineScript>.
(CTL, 12) Controller <controller> has restored a valid combination of values for attributes <IgnoreStandbyRequest> and <OnlineScript>.
(CTL, 13) Controller <controller> does not propagate offline request to its controlled application(s) <app> because its attribute <IndependentSwitch> is set to 1.
(CTL, 14) Controller <controller> cannot autorecover application <app> because there is no online host capable of running this application.
(CTL, 15) Controller <controller> cannot autorecover application <app> because the host <SysNode> from the application's PriorityList is not in the Online, Offline, or Faulted state.
(CTL, 18) Scalable Controller <controller> from application <app1> cannot determine any online host where its controlled application(s) <app2> can perform the current request. This controller is going to fail now.
(CUP, 6) app Prio_list request not satisfied, trying again ...
(DET, 20) hvgdstartup file is empty.
(DET, 22) <resource>: received unexpected detector report "ReportedState" - ignoring it. Reason: Online processing in progress, detector report may result from an interim transition state.
(DET, 23) <resource>: received unexpected detector report "ReportedState" - ignoring it. Reason: Offline processing in progress, detector report may result from an interim transition state.
(DET, 25) <resource>: received unexpected detector report "ReportedState" - ignoring it. Reason: Standby processing in progress, detector report may result from an interim transition state.
(DET, 30) Resource <resource> previously received detector report "DetReportsOnlineWarn", the warning is cleared due to report "DetReportsOnline".
(DET, 32) Resource <resource> previously received detector report "DetReportsOfflineFaulted", the state is cleared due to report "report".
(GEN, 6) command ignores request for object object not known to that detector. Request will be repeated later.
(INI, 2) InitScript does not exist in hvenv.
(INI, 3) InitScript does not exist.
(INI, 5) All system objects initialized.
(INI, 6) Using filename for the configuration file.
(INI, 8) Restart after un-graceful shutdown (e.g. host failure): A persistent fault entry will be created for all userApplications that have the PersistentFault attribute set.
(INI, 15) Running InitScript <InitScript>.
(INI, 16) InitScript completed.
(MIS, 10) The file filename can not be located during the cleanup of directory.
(SCR, 3) The detector that died is detector_name.
(SCR, 6) REQUIRED PROCESS RESTARTED: detector_name restarted.
(SCR, 7) REQUIRED PROCESS NOT RESTARTED: detector_name is no longer needed by the configuration.
(SCR, 16) Resource <resource> WarningScript has completed successfully.
(SCR, 19) Failed to execute OfflineDoneScript with resource <resource>: errorreason.
(SCR, 22) The detector <detector> with pid <pid> has been terminated. The time it has spent in the user and kernel space is <usertim> and <kerneltime> seconds respectively.
(SCR, 23) The script with pid <pid> has terminated. The time it has spent in the user and kernel space is <usertime> and <kerneltime> seconds respectively.
(SHT, 16) RMS on node SysNode has been shut down with command.
(SWT, 9) app: AutoStartAct(): object is already in stateOnline!
(SWT, 10) app: Switch request forwarded to a responsible host: SysNode.
(SWT, 15) app: Switch request forwarded to the node currently online: SysNode.
(SWT, 17) app: target host of switch request is already the currently active host, sending the online request now!
(SWT, 27) Cluster host <SysNode> is not yet online for application <app>.
(SWT, 29) HV_AUTOSTARTUP_IGNORE list of cluster hosts to ignore when autostarting is: SysNode.
(SWT, 38) Processing forced switch request for application app to node SysNode.
(SWT, 39) Processing normal switch request for application app to node SysNode.
(SWT, 40) Processing forced switch request for Application app.
(SWT, 41) Processing normal switch request for Application app.
(SWT, 48) A controller-requested switchover for the application <object> is attempted although the host <onlinehost> where it used to be Online is unreachable. Caused by the use of the force flag the RMS secure mechanism has been overridden, switch request is processed. In case that host is in Wait state the switchover is delayed until that host becomes Online, Offline, or Faulted.
(SWT, 49) Application <app> will not be switched Online on host <oldhost> because that host is not Online. Instead, it will be switched Online on host <newhost>.
(SWT, 50) Application <app> is busy. Switchover initiated from a remote host <remotenode> is delayed on this local host <localnode> until a settled state is reached.
(SWT, 51) Application <app> is busy performing standby processing. Switchover initiated due to a shutdown of the remote host <remotenode> is delayed on this local host <localnode> until Standby processing finishes.
(SWT, 52) Application <app> is busy performing standby processing. Therefore, the contracting process and a decision for its AutoStartUp is delayed on this local host <localnode> until Standby processing finishes.
(SWT, 53) Application <app> is busy performing standby processing. The forced switch request is delayed on this local host <localnode> until Standby processing finishes.
(SWT, 61) Processing request to enter Maintenance Mode for application app.
(SWT, 62) Processing request to leave Maintenance Mode for application app.
(SWT, 63) Forwarding Maintenance Mode request for application app to the host SysNode, which is currently the responsible host for this userapplication.
(SWT, 64) Request to leave Maintenance Mode for application app discarded. Reason: Application is not in Maintenance Mode.
(SWT, 65) Processing request to leave Maintenance Mode for application app, which was forwarded from host SysNode. Nothing to do, application is not in Maintenance Mode.
(SWT, 66) Processing of Maintenance Mode request for application app is finished, transitioning into stateMaint now.
(SWT, 67) Processing of Maintenance Mode request for application app is finished, transitioning out of stateMaint now.
(SWT, 70) AutoStartUp for application <app> is invoked though not all necessary cluster hosts are Online, because PartialCluster attribute is set.
(SWT, 71) Switch requests for application <app> are now permitted though not all necessary cluster hosts are Online, because PartialCluster attribute is set.
(SWT, 73) Any AutoStart or AutoStandby for app is bypassed. Reason: userApplication is in Maintenance Mode
(SWT, 74) Maintenance Mode request for application app discarded. Reason: Application is busy or locked.
(SWT, 75) Maintenance Mode request for application app discarded. Reason: Application is Faulted.
(SWT, 76) Maintenance Mode request for application app discarded. Reason: A controlled application is not ready to leave Maintenance Mode.
(SWT, 77) Maintenance Mode request for application app discarded. Reason: Application is controlled by another application and has "ControlledSwitch" attribute set.
(SWT, 78) Maintenance Mode request for application app discarded. Reason: Application has not yet finished its state initialisation.
(SWT, 79) Maintenance Mode request for application app discarded. Reason: Some resources are not in an appropriate state for safely returning into active mode. A "forceoff" request may be used to override this security feature.
(SWT, 80) Maintenance Mode request for application app discarded. Reason: Sysnode SysNode is in "Wait" state.
(SWT, 82) The SysNode SysNode is seen as Online, but it is not yet being added to the priority list of any controlled or controlling userApplication because there is ongoing activity in one or more applications (e.g. <app> on <SysNode>).
(SWT, 83) The SysNode SysNode is seen as Online, and now all userApplications have no ongoing activity - SysNode being added to priority lists.
(SWT, 85) The userApplication app is in state Inconsistent on host SysNode, the priority hvswitch request is being redirected there in order to clear the inconsistency.
(SWT, 86) The userApplication app is in state Inconsistent on host SysNode1, the hvswitch request to host SysNode2 is being redirected there in order to clear the inconsistency.
(SWT, 87) The userApplication app is in state Maintenance.  Switch request skipped.
(SWT, 88) The following node(s) were successfully killed by the forced application switch operation: hosts
(SWT, 89) Processing forced switch request for resource resource to node sysnode.
(SWT, 90) Processing normal switch request for resource resource to node sysnode.
(SYS, 2) This host has received a communication verification request from host <SysNode>. A reply is being sent back.
(SYS, 3) This host has received a communication verification reply from host <SysNode>.
(SYS, 5) This host is sending a communication verification request to host <SysNode>.
(SYS, 9) Attempting to shut down the cluster host SysNode by invoking a Shutdown Facility via (sdtool -k hostname).
(SYS, 12) Although host <hostname> has reported online, it does not respond with its checksum. That host is either not reachable, or does not have this host <localhost> in its configuration. Therefore, it will not be brought online.
(SYS, 51) Remote host <SysNode> replied correct checksum out of sync.
(UAP, 10) app: received agreement to go online. Sending Request Online to the local child now.
(UAP, 13) appli AdminSwitch: application is expected to go online on local host, sending the online request now.
(UAP, 26) app received agreement to go online. Sending RequestOnline to the local child now.
(UAP, 31) app:AdminSwitch: passing responsibility for application to host <SysNode> now.
(UAP, 46) Request <request> to application <app> is ignored because this application is in state Unknown.
(US, 2) FAULT RECOVERY ATTEMPT: The object object has faulted and its AutoRecover attribute is set. Attempting to recover this resource by running its OnlineScript.
(US, 3) FAULT RECOVERY FAILED: Re-running the OnlineScript for object failed to bring the resource Online.
(US, 4) FAULT RECOVERY SUCCEEDED: Resource resource has been successfully recovered and returned to the Online state.
(US, 7) object: Transitioning into a Fault state caused by a persistent Fault info
(US, 8) Cluster host SysNode has been successfully status.
(US, 9) Cluster host SysNode has become online.
(US, 11) Temporary heartbeat failure disappeared. Now receiving heartbeats from cluster host hostname again.
(US, 12) Cluster host SysNode has become Faulted. A shut down request will be sent immediately!
(US, 13) Cluster host SysNode will now be shut down!
(US, 16) app: Online processing finished!
(US, 17) app: starting Online processing.
(US, 18) app: starting Offline processing.
(US, 19) app: starting Offline (Deact) processing.
(US, 20) app: Offline (Deact) processing finished!
(US, 21) app: Offline processing finished!
(US, 22) app: starting PreCheck.
(US, 24) app: Fault processing finished!
(US, 25) app: Collecting outstanding Faults ....
(US, 26) app: Fault processing finished! Starting Offline processing.
(US, 27) app: precheck successful.
(US, 30) app: Offline processing after Fault finished!
(US, 32) FAULT RECOVERY SKIPPED! userApplication is already faulted. No fault recovery is possible for object object!
(US, 34) app: Request standby skipped -- application must be offline or standby for standby request to be honored.
(US, 35) app: starting Standby processing.
(US, 36) app: Standby processing finished!
(US, 37) app: Standby processing skipped since this application has no standby capable resources.
(US, 40)  app: Offline processing due to hvshut finished!
(US, 41) The userApplication <userapplication> has gone into the Online state after Standby processing.
(US, 44) resource: Fault propagation to parent ends here! Reason is either a MonitorOnly attribute of the child reporting the Fault or the "or" character of the current object
(US, 46) app: Processing of Clear Request finished. Resuming Maintenance Mode.
(US, 56) The userApplication userapplication is already Online at RMS startup time. Invoking an Online request immediately in order to clean up possible inconsistencies in the state of the resources.
(WLT, 2) Resource resource's ScriptType (script) has exceeded the ScriptTimeout of timeout seconds.
(WLT, 4) Object object's script has been killed since that object has been deleted.
(WLT, 7) Sending SIGNAL to script <script> (pid) now
(WRP, 19) RMS logging restarted on host <SysNode> due to a hvlogclean request.
(WRP, 20) This switchlog is being closed due to a hvlogclean request. RMS continues logging in a new switchlog that is going to be opened immediately. New detector logs are also going to be reopened right now.
(WRP, 21) A message cannot be sent into a Unix message queue from the process <pid>, <process>, after <number> attempts in the last <seconds> seconds. Still trying.
(WRP, 22) A message cannot be sent into a Unix message queue id <queueid> by the process <pid>, <process>.
(WRP, 26) Child process <cmd> with pid <pid> has been killed because it has exceeded its timeout period.
(WRP, 27) Child process <cmd> with pid <pid> will not be killed though it has exceeded its timeout period.
(WRP, 36) Time synchronization has been re-established between the local node and cluster host SysNode.
(WRP, 37) The package parameters of the package <package> on the remote host <hostname> are: Version = <version>, Load = <load>.
(WRP, 38) The Process Id (pid) and the startup time of the RMS monitor on the remote host <hostname> are <pid> and <startuptime>.
(WRP, 49) The base monitor on the local host is unable to ping the echo port on the remote host SysNode.
(WRP, 50) The base monitor on the local host is able to ping the echo port on the remote host SysNode, but is unable to communicate with the base monitor on that host.
(WRP, 53) Current heartbeat mode is <mode>.
(WRP, 59) The cluster host <SysNode> does not support ELM heartbeat. ELM heartbeat does not start. Use UDP heartbeat only.
(WRP, 63) The ELM heartbeat started for the cluster host <SysNode>.
(WRP, 66) The elm heartbeat detects that the cluster host <SysNode> has become offline.
(ADC, 19) Clearing the cluster Wait state for SysNode <sysnode>, by faking a successful host elimination! If <sysnode> is actually still Online, and/or if any applications are Online, this hvutil -u command may result in data corruption!
(ADC, 23) File <filename> can't be opened: <errortext>.
(ADC, 24) File cannot be opened for read.
(ADC, 51) hvshut utility has timed out.
(ADC, 65) Since RMS on this host has already encountered other Online nodes, it will remain running. However, no nodes reporting incorrect checksums will be brought Online.
(ADM, 61) object is deactivated. Switch request skipped.
(ADM, 65) System hostname is currently down !!!!
(ADM, 69) Shutting down RMS while resource resource is not offline.
(ADM, 80) Application <userapplication> has a non-null attribute ControlledSwitch. Therefore, it should be switched from the controller. 'hvswitch' command ignored.
(ADM, 105) Shutdown on targethost <sysnode> in progress. Switch request for application <object> skipped!
(ADM, 110) Sysnode <node> has been marked as going down, but failed to become Offline. Check for a possibly hanging shutdown. Note that this SysNode cannot re-join the cluster without having finished its shutdown to avoid cluster inconsistency!
(ADM, 111) Timeout occurred for local hvshut request. Reporting a failure back to the command now!
(ADM, 113) Terminating due to a timeout of RMS shutdown. All running scripts will be killed!
(ADM, 114) userapplication: Shutdown in progress. AutoSwitchOver (ShutDown) attribute is set, but the userApplication failed to reach a settled Offline state. SwitchOver must be skipped!
(ADM, 115) Received "old style" shutdown contract, though no host with RMS 4.0 is member of the cluster. Discarding it!
(ADM, 116) Received "new style" shutdown contract, though at least one host with RMS 4.0 is member of the cluster. Discarding it!
(ADM, 129) Shutdown on targethost <sysnode> in progress. Switch request for resource <resource> skipped!
(BAS, 1) Object <object> is not offline!
(BAS, 8) Object <object> has no rName attribute. The rName attribute is normally used by the generic detector to determine which resource to monitor. Be sure that your detector can function without an rName attribute.
(BAS, 22) DetectorStartScript for kind <kind> is not defined in either .us or hvgdstartup files, therefore RMS will be using default <gkind -kkind -ttimeperiod>.
(BM, 4) The CF cluster timeout <cftimeout> exceeds the RMS timeout <rmstimeout>. This may result in RMS node elimination request before CF timeout is exceeded. Please check the CF timeout specified in "/etc/default/cluster.config" and the RMS heartbeat miss time specified by hvcm '-h' option.
(BM, 8) Failed sending message <message> to object <object> on host <host>.
(BM, 28) Application <userapplication> has a non-null attribute ControlledHvswitch. Therefore, it should be switched on/off from the controller. 'hvutil -f/-c' command ignored.
(BM, 30) Ignoring dynamic modification failure for object <object>: attribute <attribute> is invalid.
(BM, 31) Ignoring dynamic modification failure at line linenumber: cannot modify attribute <attribute> of object <object> with value <value> because the attribute does not exist.
(BM, 53) The RMS-CF-CIP mapping cannot be determined for any host because the CIP configuration file <configname> cannot be opened. Please verify all entries in <configfilename> are correct and that CF and CIP are fully configured.
(BM, 70) Some messages were not sent out during RMS shutdown.
(BM, 76) Failed to find "rmshb" port address in /etc/services. The "hvutil -A" command will fail until a port entry for "rmshb" is made in the /etc/services file and RMS is restarted.
(BM, 77) Failed to allocate a socket for "rmshb" port monitoring.
(BM, 78) The reserved port for "rmshb" appears to be in use. The "rmshb" port is reserved in the /etc/services file but another process has it bound already. Select another port by editing the /etc/services file and propagate this change to all nodes in the cluster and then restart RMS.
(BM, 79) Failed to listen() on the "rmshb" port.
(BM, 82) A message to host <remotehost> failed to reach that host after <count> delivery attempts. Communication with that host has been broken.
(BM, 83) Failed to execute the fcntl system call.
(BM, 85) Application <userapplication> has a non-null attribute attribute. Therefore, it should be deactivated from the controller. 'hvutil -d' command ignored.
(BM, 112) Controller <controller> has its attribute Follow set to 1, while its ClusterExclusive attribute is set to 0. However, it is controlling, directly or indirectly via a chain of Follow controllers, an application <application> -- that application contains a resource named <resource> whose ClusterExclusive attribute is set to 1. This is not allowed due to a potential problem of that resource becoming Online on more than one host. Cluster exclusive resources must be controlled by cluster exclusive Follow controllers.
(BM, 119) The RMS base monitor failed to be locked in memory via mlockall() - <errortext>.
(CTL, 6) Controller <controller> has detected more than one controlled application Online.
(CTL, 7) Controller <controller> has its attribute <IgnoreOnlineRequest> set to 1 and its OnlineScript is empty. Therefore, a request Online to the controller might fail to bring the controlled application Online.
(CTL, 8) Controller <controller> has its attribute <IgnoreOfflineRequest> set to 1 and its OfflineScript is empty. Therefore, a request Offline to the controller might fail to bring the controlled application Offline.
(CTL, 11) Controller <controller> has its attribute StandbyCapable set to 1, its attribute <IgnoreStandbyRequest> set to 1, and its OnlineScript is empty. Therefore, a request Standby to the controller might fail to bring the controlled application Standby.
(CUP, 1) userApplication: priority list conflict detected, trying again ...
(CUP, 9) userApplication: Switch Request skipped, processing of current online host contract is not yet settled.
(CUP, 11) userapplication offline processing failed! The application is still partially online. The switch request is being skipped.
(CUP, 12) userApplication switch request skipped, required target node is not ready to go online!
(CUP, 13) userApplication switch request skipped, no available node is ready to go online!
(CUP, 14) userApplication did not get a response from <sender>.
(CUP, 15) userApplication: targethost <host> is no longer available.
(CUP, 16) userapplication offline processing failed! The application is still partially online. The switch request is being skipped.
(CUP, 17) userApplication: current online host request of host "host" accepted, local inconsistency has been overridden with the forced flag.
(CUP, 18) userApplication: current online host request of host "host" denied due to a local inconsistent state.
(CUP, 19) userApplication: is locally online, but is inconsistent on another host. Trying to force a CurrentOnlineHost contract ...
(CUP, 20) userApplication: AutoStart skipped, application is inconsistent on host "hostname".
(CUP, 21) userApplication: FailOver skipped, application is inconsistent on host "hostname".
(CUP, 22) userApplication: Switch Request skipped, application is inconsistent on host "hostname".
(CUP, 23) userApplication: Switch Request skipped, application is inconsistent on local host.
(CUP, 24) userApplication: Switch Request processed, local inconsistency has been overridden with the forced flag.
(CUP, 25) userApplication is currently in an inconsistent state. The switch request is being skipped. Clear inconsistency first or you may override this restriction by using the forced switch option.
(CUP, 26) userApplication: LastOnlineHost conflict detected. Processing an AutoStart or PrioSwitch CurrentOnlineHost Contract with OnlinePriority enabled. TargetHost of Switch request is host "host", but the local host is the LastOnlineHost. Denying the request.
(CUP, 27) userApplication: LastOnlineHost conflict occurred. Skipping local Online request, because host "host" has a conflicting LastOnlineHost entry.
(CUP, 28) userApplication: priority switch skipped, cannot get deterministic information about the LastOnlineHost. Tried to switch to "hostname", but "loh" claims to be the LastOnlineHost. Conflict may be resolved by system administrator intervention (specifying explicitly the targethost in the hvswitch call).
(CUP, 29) userApplication: LastOnlineHost conflict occurred. Timestamps of conflicting LastOnlineHosts entries do not allow a safe decision, because their difference is lower than time seconds. Conflict must be resolved by system administrator intervention (invalidate the LastOnlineHost entry via "hvutil -i userApplication" and invoke an explicit hvswitch call).
(CUP, 30) userApplication: Denying maintenance mode request. userApplication is busy or is in stateFaulted.
(CUP, 31) userApplication: maintenance mode request was denied by the remote SysNode "SysNode" because userApplication is busy or is in stateFaulted or not ready to leave Maintenance Mode. See remote switchlog for details
(CUP, 32) userApplication: Denying maintenance mode request. The following object(s) are not in an appropriate state for safely returning to normal operation: <resource>
(CUP, 33) userApplication: Denying maintenance mode request. The initialization of the state of the userApplication is not yet complete.
(CUP, 34) userApplication: LastOnlineHost conflict detected. Processing an AutoStart or PrioSwitch CurrentOnlineHost Contract with OnlinePriority enabled. TargetHost of Switch request is host "host", but the local host is the LastOnlineHost. The local host takes over Switch request.
(DET, 29) Resource <resource>: received detector report DetReportsOnlineWarn. The WarningScript "warningscript" will be run.
(DET, 31) Resource <resource> received detector report "DetReportsOfflineFaulted", the posted state will become <offlinefault> until one of the subsequent reports "DetReportsOffline", "DetReportsOnline", "DetReportsStandby" or "DetReportsFaulted"
(DET, 35) Resource <resource> received detector report "DetReportsOnlineWarn", the WarningScript is not defined and will not be run.
(SCR, 17) Resource <resource> WarningScript has failed with status status.
(SCR, 25) Controller <resource> StateChangeScript has failed with status status.
(SCR, 31) AppStateScript of userApplication userapplication has failed with status status.
(SWT, 1) The 'AutoStartUp' attribute is set and the HV_AUTOSTART_WAIT time for the user application <appli> has expired, without an automatic start up having yet taken place. Reason: not all necessary cluster hosts are online!
(SWT, 5) AutoStartUp skipped by object. Reason: object is faulted!
(SWT, 6) AutoStartUp skipped by object. Reason: Fault occurred during initialization!
(SWT, 7) AutoStartUp skipped by object. Reason: object is deactivated!
(SWT, 8) AutoStartUp skipped by object. Reason: not all necessary cluster hosts are online!
(SWT, 11) object: no responsible node available, switch request skipped.
(SWT, 12) object is busy or locked, switch request skipped.
(SWT, 13) Not all necessary cluster hosts for application <userapplication> are online, switch request is being skipped. If the application should be brought online anyway, use the force flag. Be aware, however, that forcing the application online could result in an inconsistent cluster if the application is online somewhere else!
(SWT, 14) object is deactivated, switch request skipped.
(SWT, 16) Switch request skipped, no target host found or target host is not ready to go online!
(SWT, 18) object: is not ready to go online on local host, switch request skipped!
(SWT, 19) object: is not ready to go online on local host, trying to find another host.
(SWT, 21) object: local node has faulted or offlinefaulted descendants, no other node is ready to go online, switchover skipped.
(SWT, 22) object: local node has faulted or offlinefaulted descendants, forwarding switchover request to next host: targethost.
(SWT, 23) object is busy or locked, deact request skipped.
(SWT, 24) object is deactivated, switch request skipped.
(SWT, 28) hostname is unknown locally!
(SWT, 30) <object> was Online on <onlinehost>, which is not reachable. Switch request must be skipped to ensure data integrity. This secure mechanism may be overridden with the forced flag (-f) of the hvswitch command. WARNING: Ensure that no further access to the data is performed by <onlinehost>, otherwise the use of the -f flag may break data consistency!
(SWT, 31) <object> was Online on <onlinehost>, which is not reachable. Caused by the use of the force flag the RMS secure mechanism has been overridden, Switch request is processed.
(SWT, 32) <object> is currently in an inconsistent state on local host. The switch request is being skipped. Clear inconsistency first or you may override this restriction by using the forced switch option.
(SWT, 33) <object> is not ready to go online on the local host. Due to a local inconsistent state no remote targethost may be used. The switch request is being skipped.
(SWT, 34) <object> is not ready to go online on local host, trying to find another host.
(SWT, 35) object is not ready to go online on local host, switch request skipped.
(SWT, 36) <sysnode> is in Wait state, switch request skipped.
(SWT, 37) AutoStartUp for application <userapplication> is ignored since hvmod had been invoked with the flag '-i'.
(SWT, 58) Processing policy switch request for application userapplication. The cluster host sysnode is in a Wait state, no switch request can be processed. The application will go offline now.
(SWT, 59) Processing policy switch request for application userapplication. No cluster host is available to take over this application. The application will go offline now.
(SWT, 60) Processing policy switch request for application userapplication which is in state Standby. The application will go offline now.
(SWT, 69) AutoStartUp for application <userapplication> is ignored since the environment variable HV_AUTOSTARTUP is set to 0.
(SWT, 72) userapplication received Maintenance Mode request from the controlling userApplication. The request is denied, because the state is either Faulted or Deact or the application is busy or locked.
(SYS, 16) The RMS internal SysNode name "sysnode" is not compliant with the naming convention of the Reliant Cluster product. A non-compliant setting is possible, but will cause all RMS commands to accept only the SysNode name, but not the HostName (uname -n) of the cluster nodes!
(SYS, 18) The SysNode <sysnode> does not follow the RMS naming convention for SysNodes. To avoid seeing this message in the future, please rename the SysNode to use the CF-based name of the form "<CFname>RMS" and restart the RMS monitor.
(SYS, 88) No heartbeat from cluster host sysnode within the last 10 seconds. This may be a temporary problem caused by high system load. RMS will react if this problem persists for time seconds more.
(SYS, 99) The attribute <alternateip>  specified for SysNode <sysnode> should not be used in CF mode. Ignoring it.
(UAP, 2) object got token token from node node. TOKEN SKIPPED! Reason: reason.
(UAP, 3) object: double fault occurred and Halt attribute is set. Halt attribute will be ignored, because no other cluster host is available.
(UAP, 4) object has become online, but is also in the HV_AUTOSTARTUP_IGNORE list of cluster hosts to be ignored on startup! The Cluster may be in an inconsistent condition!
(UAP, 11) object is not ready to go online on local node. Online processing skipped.
(UAP, 12) object: targethost of switch request: <host> no longer available, request skipped.
(UAP, 14) object: is not ready to go online on local host. Switch request skipped.
(UAP, 18) SendUAppLockContract(): invalid token: token.
(UAP, 25) AutoStartUp skipped by object. Reason: not all necessary cluster hosts are online!
(UAP, 30) object is not ready to go online on local host. Trying to find another host.
(UAP, 52)  userapplication: double fault occurred and Halt attribute is set. Halt attribute will be ignored, because attribute AutoSwitchOver is set to attrvalue.
(US, 10) object: userApplication transitions into stateOnline, though it was faulted before according to the persistent Fault info. Check for possible inconsistencies.
(US, 23) appli: double fault occurred, processing terminated.
(US, 28) object: PreCheck failed. Switch request will be canceled now and not be forwarded to another host, because this was a directed switch request, where the local host has explicitly been specified as targethost.
(US, 29) object: PreCheck failed. Trying to find another host ...
(US, 43) object: PreCheck failed. Standby request canceled.
(US, 45) object: PreCheck failed. Switch request will be canceled now and not be forwarded to another host, because AutoSwitchOver=ResourceFailure is not set.
(US, 47) userapplication: Processing of Clear Request resulted in a Faulted state. Resuming Maintenance Mode nevertheless.It is highly recommended to analyse and clear the fault condition before leaving Maintenance Mode!
(US, 55) object: PreCheck failed, because the controller userApplication of type LOCAL "userapplication" is not ready to perform a PreCheck.
(WLT, 6) Resource resource's script did not terminate gracefully after receiving SIGTERM.
(WRP, 11) Message send failed, queue id <queueid>, process <process>, <name>, to host <node>.
(WRP, 39) The RMS base monitor has not been able to process timer interrupts for the last n seconds. This delay may have been caused by an unusually high OS load. The differences between respective values of the times() system call are for tms_utime utime, for tms_stime stime, for tms_cutime cutime, and for tms_cstime cstime. If this condition persists, then normal RMS operations can no longer be guaranteed; it can also lead to a loss of heartbeats with remote hosts and to an elimination of the current host from the cluster.
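The (WRP, 39) message above reports the deltas of the four fields returned by the times() system call. As a hedged illustration (not RMS source code, which calls times() from C), the same four fields can be sampled in Python via os.times():

```python
import os
import time

# Sample the process times twice; the differences correspond to the
# tms_utime / tms_stime / tms_cutime / tms_cstime deltas cited in (WRP, 39).
# The sleep interval is a stand-in for the monitored timer period.
before = os.times()
time.sleep(0.05)
after = os.times()

utime = after.user - before.user                        # tms_utime delta
stime = after.system - before.system                    # tms_stime delta
cutime = after.children_user - before.children_user     # tms_cutime delta
cstime = after.children_system - before.children_system # tms_cstime delta
print(utime, stime, cutime, cstime)
```

A persistently large gap between wall-clock time and these CPU-time deltas is the kind of symptom the message describes.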
(WRP, 41) The interconnect entry <interconnect> specified for SysNode <sysnode> has the same IP address as that of the interface <existinginterconnect>.
(WRP, 51) The 'echo' service for udp may not have been turned on, on the local host. Please ensure that the echo service is turned on.
(ADC, 1) Since this host <hostname> has been online for no more than time seconds and due to the previous error, it will shut down now.
(ADC, 2) Since not all of the applications are offline or faulted on this host <hostname>, and due to the previous error, it will remain online, but neither automatic nor manual switchover will be possible on this host until the <detector> detector reports offline or faulted.
(ADC, 3) Remote host <hostname> reported the checksum (remotechecksum) which is different from the local checksum (localchecksum).
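The (ADC, 3) message above is raised when the configuration checksum reported by a remote host differs from the local one. Purely as an illustration of the check (RMS computes its own checksum; the hash function and inputs here are assumptions):

```python
import hashlib

def config_checksum(data: bytes) -> str:
    """Illustrative checksum over configuration file contents."""
    return hashlib.md5(data).hexdigest()

# Hypothetical contents of the same configuration file on two hosts.
local = config_checksum(b"config.us contents on local host")
remote = config_checksum(b"config.us contents on remote host")
print("match" if local == remote else "checksum mismatch")
```

If the checksums differ, both hosts are not running the same RMS configuration and the remote SysNode will not be brought online (compare (SYS, 48)).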
(ADC, 4) Host <hostname> is not in the local configuration.
(ADC, 5) Since this host <hostname> has been online for more than time seconds, and due to the previous error, it will remain online, but neither automatic nor manual switchover will be possible on this host until the <detector> detector reports offline or faulted.
(ADC, 15) Global environment variable <envattribute> is not set in hvenv file.
(ADC, 17) <SysNode> is not in the Wait state, hvutil -u request skipped!
(ADC, 18) Local environmental variable <envattribute> is not set up in hvenv file.
(ADC, 20) <SysNode> is not in the Wait state. hvutil -o request skipped!
(ADC, 25) Application <userapplication> is locked or busy, modification request skipped.
(ADC, 27) Dynamic modification failed.
(ADC, 30) HV_WAIT_CONFIG value <seconds> is incorrect, using 120 instead.
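The (ADC, 30) message above describes falling back to 120 seconds when HV_WAIT_CONFIG holds an unusable value. A minimal sketch of that validate-with-fallback pattern (not the RMS implementation; the acceptance rule "positive integer" is an assumption):

```python
import os

DEFAULT_WAIT = 120  # fallback value named in (ADC, 30)

def hv_wait_config(env=os.environ):
    """Return HV_WAIT_CONFIG as a positive integer number of seconds,
    or the documented fallback of 120 if the value is unusable."""
    raw = env.get("HV_WAIT_CONFIG", "")
    try:
        seconds = int(raw)
    except ValueError:
        return DEFAULT_WAIT
    return seconds if seconds > 0 else DEFAULT_WAIT

print(hv_wait_config({"HV_WAIT_CONFIG": "abc"}))  # unusable -> 120
```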
(ADC, 31) Cannot get the NET_SEND_Q queue.
(ADC, 32) Message send failed during the file copy of file <filename>.
(ADC, 33) Dynamic modification timeout.
(ADC, 34) Dynamic modification timeout during start up - bm will exit.
(ADC, 35) Dynamic modification timeout, bm will exit.
(ADC, 37) Dynamic modification failed: cannot make a non-critical resource <resource> critical by changing its attribute MonitorOnly to 0 since this resource is not online while it belongs to an online application <userapplication>; switch the application offline before making this resource critical.
(ADC, 38) Dynamic modification failed: application <userapplication> has no children, or its children are not valid resources.
(ADC, 39) The putenv() has failed (failurereason).
(ADC, 41) The Wizard action failed (command).
(ADC, 43) The file transfer for <filename> failed in "command". The dynamic modification will be aborted.
(ADC, 44) The file transfer for <filename> failed in "command". The join will be aborted.
(ADC, 45) The file transfer for <filename> failed in "command" with errno <errno> - errorreason. The dynamic modification will be aborted.
(ADC, 46) The file transfer for <filename> failed with unequal write byte count, expected expectedvalue actual actualvalue. The dynamic modification will be aborted.
(ADC, 47) RCP fail: can't open file filename.
(ADC, 48) RCP fail: fseek errno errno.
(ADC, 49) Error checking hvdisp temporary file <filename>, errno <errno>, hvdisp process pid <processid> is restarted.
(ADC, 57) An error occurred while writing out the RMS configuration for the joining host. The hvjoin operation is aborted.
(ADC, 58) Failed to prepare configuration files for transfer to a joining host. Command used <command>.
(ADC, 59) Failed to store remote configuration files on this host. Command used <command>.
(ADC, 60) Failed to compress file <file>. Command used <command>.
(ADC, 61) Failed to shut down RMS on host <host>.
(ADC, 62) Failed to shut down RMS on this host, attempting to exit RMS.
(ADC, 63) Error <errno> while reading file <file>, reason: <reason>.
(ADC, 68) Error <errno> while opening file <file>, reason: <reason>.
(ADC, 70) Message sequence # is out of sync - File transfer of file <filename> has failed.
(ADM, 3) Dynamic modification failed: some resource(s) supposed to come offline failed.
(ADM, 4) Dynamic modification failed: some resource(s) supposed to come online failed.
(ADM, 5) Dynamic modification failed: object <object> is not linked to any application.
(ADM, 6) Dynamic modification failed: cannot add new resource <resource> since another existing resource with this name will remain in the configuration.
(ADM, 7) Dynamic modification failed: cannot add new resource <resource> since another existing resource with this name will not be deleted.
(ADM, 8) Dynamic modification failed: cycle of length <cycle_length> detected in resource <resource> -- <cycle>.
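The (ADM, 8) message above reports a dependency cycle among resources. As a hedged sketch of the kind of check behind it (the graph encoding and names are assumptions, not RMS internals):

```python
# Detect a parent->child cycle in a resource graph via depth-first search.
def find_cycle(children, start):
    """Return the cycle path reachable from `start`, or None."""
    path = []       # current DFS stack
    seen = set()    # nodes proven cycle-free

    def dfs(node):
        if node in path:                      # back edge: cycle found
            return path[path.index(node):] + [node]
        if node in seen:
            return None
        path.append(node)
        for child in children.get(node, ()):
            cycle = dfs(child)
            if cycle:
                return cycle
        path.pop()
        seen.add(node)
        return None

    return dfs(start)

# Hypothetical resources: A -> B -> C -> A is a cycle of length 3.
graph = {"A": ["B"], "B": ["C"], "C": ["A"]}
print(find_cycle(graph, "A"))  # -> ['A', 'B', 'C', 'A']
```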
(ADM, 9) Dynamic modification failed: cannot modify resource <resource> since it is going to be deleted.
(ADM, 11) Dynamic modification failed: cannot delete object <resource> since it is a descendant of another object that is going to be deleted.
(ADM, 12) Dynamic modification failed: cannot delete <resource> since its children will be deleted.
(ADM, 13) Dynamic modification failed: object <resource> is in state <state> but needs to be in one of stateOnline, stateStandby, stateOffline, stateFaulted, or stateUnknown.
(ADM, 14) Dynamic modification failed: cannot link to or unlink from an application <userapplication>.
(ADM, 15) Dynamic modification failed: parent object <parentobject> is not a resource.
(ADM, 16) Dynamic modification failed: child object <childobject> is not a resource.
(ADM, 17) Dynamic modification failed: cannot link parent <parentobject> and child <childobject> since they are already linked.
(ADM, 18) Dynamic modification failed: cannot link a faulted child <childobject> to parent <parentobject> which is not faulted.
(ADM, 19) Dynamic modification failed: cannot link child <childobject> which is not online to online parent <parentobject>.
(ADM, 20) Dynamic modification failed: cannot link child <childobject> which is neither offline nor standby to offline or standby parent <parentobject>.
(ADM, 21) Dynamic modification failed: Cannot unlink parent <parentobject> and child <childobject> since they are not linked.
(ADM, 22) Dynamic modification failed: child <childobject> will be unlinked but not linked back to any of the applications.
(ADM, 23) Dynamic modification failed: sanity check did not pass for linked or unlinked objects.
(ADM, 24) Dynamic modification failed: object <object> that is going to be linked or unlinked will be either deleted, or unlinked from all applications.
(ADM, 25) Dynamic modification failed: parent object <parentobject> is absent.
(ADM, 26) Dynamic modification failed: parent object <parentobject> is neither a resource nor an application.
(ADM, 27) Dynamic modification failed -- child object <childobject> is absent.
(ADM, 28) Dynamic modification failed: child object <childobject> is not a resource.
(ADM, 29) Dynamic modification failed -- parent object <parentobject> is absent.
(ADM, 30) Dynamic modification failed: parent object <parentobject> is not a resource.
(ADM, 31) Dynamic modification failed: child object <childobject> is absent.
(ADM, 32) Dynamic modification failed: child object <childobject> is not a resource.
(ADM, 33) Dynamic modification failed: object <object> cannot be deleted since either it is absent or it is not a resource.
(ADM, 34) Dynamic modification failed: deleted object <object> is neither a resource nor an application nor a host.
(ADM, 37) Dynamic modification failed: resource <object> cannot be brought online and offline/standby at the same time.
(ADM, 38) Dynamic modification failed: existing parent resource <parentobject> is in state <state> but needs to be in one of stateOnline, stateStandby, stateOffline, stateFaulted, or stateUnknown.
(ADM, 39) Dynamic modification failed: new resource object which is a child of application <userapplication> has its HostName <hostname> the same as another child of application <userapplication>.
(ADM, 40) Dynamic modification failed: a new child <child_object> of existing application <userapplication> does not have its HostName set to a name of any SysNode.
(ADM, 41) Dynamic modification failed: existing child <childobject> is not online, but needs to be linked with <parentobject> which is supposed to be brought online.
(ADM, 42) Dynamic modification failed: existing child <childobject> is online, but needs to be linked with <parentobject> which is supposed to be brought offline.
(ADM, 43) Dynamic modification failed: linking the same resource <childobject> to different applications <userApplication1> and <userApplication2>.
(ADM, 44) Dynamic modification failed: object <object> does not have an existing parent.
(ADM, 45) Dynamic modification failed: HostName is absent or invalid for resource <object>.
(ADM, 46) Dynamic modification failed: linking the same resource <object> to different applications <userapplication1> and <userapplication2>.
(ADM, 47) Dynamic modification failed: parent object <parentobject> belongs to a deleted application.
(ADM, 48) Dynamic modification failed: child object <childobject> belongs to a deleted application.
(ADM, 49) Dynamic modification failed: deleted object <objectname> belongs to a deleted application.
(ADM, 50) Dynamic modification failed: cannot delete object <object> since it is a descendant of a new object.
(ADM, 51) Dynamic modification failed: cannot link to child <childobject> since it will be deleted.
(ADM, 52) Dynamic modification failed: cannot link to parent <parentobject> since it will be deleted as a result of deletion of object <object>.
(ADM, 53) Dynamic modification failed: <node> is absent.
(ADM, 54) Dynamic modification failed: NODE <object>, attribute <attribute> is invalid.
(ADM, 55) Cannot create admin queue.
(ADM, 57) hvdisp - open failed - filename.
(ADM, 58) hvdisp - open failed - filename : errormsg.
(ADM, 59) userapplication: modification is in progress, switch request skipped.
(ADM, 60) <resource> is not a userApplication object, switch request skipped!
(ADM, 62) The attribute <ShutdownScript> may not be specified for object <object>.
(ADM, 63) System name <sysnode> is unknown.
(ADM, 67) sysnode: Cannot shut down.
(ADM, 70) NOT ready to shut down.
(ADM, 75) Dynamic modification failed: child <resource> of userApplication object <userapplication> has HostName attribute <hostname> common with other children of the same userApplication.
(ADM, 76) Modification of attribute <attribute> is not allowed within existing object <object>.
(ADM, 77) Dynamic modification failed: cannot delete object object since its state is currently being asserted.
(ADM, 78) Dynamic modification failed: PriorityList <prioritylist> does not include all the hosts where the application <userapplication> may become Online. Make sure that PriorityList contains all hosts from the HostName attribute of the application's children.
(ADM, 79) Dynamic modification failed: PriorityList <prioritylist> includes hosts where the application <userapplication> may never become Online. Make sure PriorityList contains only hosts from the HostName attributes of the application's children.
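Messages (ADM, 78) and (ADM, 79) together require that PriorityList contain exactly the hosts named in the HostName attributes of the application's children. A minimal sketch of that set comparison (host names here are made-up examples):

```python
# Illustrative consistency check: PriorityList vs. children's HostName set.
def check_priority_list(priority_list, child_hostnames):
    """Return (missing, extra): hosts absent from PriorityList that a
    child names -> (ADM, 78); hosts in PriorityList no child names
    -> (ADM, 79)."""
    hosts = set(child_hostnames)
    plist = set(priority_list)
    missing = hosts - plist   # would trigger (ADM, 78)
    extra = plist - hosts     # would trigger (ADM, 79)
    return missing, extra

# Hypothetical example: a child names fuji3RMS but PriorityList omits it.
missing, extra = check_priority_list(["fuji2RMS"], ["fuji2RMS", "fuji3RMS"])
print(sorted(missing), sorted(extra))  # -> ['fuji3RMS'] []
```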
(ADM, 81) Dynamic modification failed: application <userapplication> may not have more than <maxcontroller> parent controllers as specified in its attribute MaxControllers.
(ADM, 82) Dynamic modification failed: cannot delete type <object> unless its state is one of Unknown, Wait, Offline or Faulted.
(ADM, 83) Dynamic modification failed: cannot delete SysNode <sysnode> since this RMS monitor is running on this SysNode.
(ADM, 84) Dynamic modification failed: cannot add SysNode <sysnode> since its name is not valid.
(ADM, 85) Dynamic modification failed: timeout expired, timeout symbol is <symbol>.
(ADM, 86) Dynamic modification failed: application <userapplication> cannot be deleted since it is controlled by the controller <controller>.
(ADM, 87) Dynamic modification failed: only local attributes such as ScriptTimeout, DetectorStartScript, NullDetector or MonitorOnly can be modified during local modification (hvmod -l).
(ADM, 88) Dynamic modification failed: attribute <attribute> is modified more than once for object <object>.
(ADM, 89) Dynamic modification failed: cannot rename existing object <sysnode> to <othersysnode> because either there is no object named <sysnode>, or another object with the name <othersysnode> already exists, or a new object with that name is being added, or the object is not a resource, or it is a SysNode, or it is a controlled application which state will not be compatible with its controller.
(ADM, 90) Dynamic modification failed: cannot change attribute Resource of the controller object <controllerobject> from <oldresource> to <newresource> because some of <oldresource> are going to be deleted.
(ADM, 91) Dynamic modification failed: controller <controller> has its Resource attribute set to <resource>, but application named <userapplication> is going to be deleted.
(ADM, 95) Cannot retrieve information about command line used when starting RMS. Start on remote host must be skipped. Please start RMS manually on remote hosts.
(ADM, 96) Remote startup of RMS failed <startupcommand>. Reason: errorreason.
(ADM, 98) Dynamic modification failed: controller <controller> has its Resource attribute set to <resource>, but some of the controlled applications from this list do not exist.
(ADM, 99) Dynamic modification failed: cannot change attribute Resource of the controller object <controller> from <oldresource> to <newresource> because one or more of the applications listed in <newresource> is not an existing application or its state is incompatible with the state of the controller, or because the list contains duplicate elements.
(ADM, 100) Dynamic modification failed: because a controller <controller> has AutoRecover set to 1, its controlled application <userapplication> cannot have PreserveState set to 0 or AutoSwitchOver set to 1.
(ADM, 106) The total number of SysNodes specified in the configuration for this cluster is hosts. This exceeds the maximum allowable number of SysNodes in a cluster which is maxhosts.
(ADM, 107) The cumulative length of the SysNode names specified in the configuration for the userApplication <userapplication> is length. This exceeds the maximum allowable length which is maxlength.
(ADM, 125) Dynamic modification failed: The <attr> entry <value> for SysNode <sysnode> matches the <attr> entry or the SysNode name for another SysNode.
(ADM, 126) userapplication: This application is controlled by controller object. That controller is defined as a LOCAL controller and as such switching this application must be done by switching the controlling application userapplication.
(ADM, 128) <resource> is neither a userApplication nor a resource object, switch request skipped!
(BAS, 2) Duplicate line in hvgdstartup.
(BAS, 3) No kind specified in hvgdstartup.
(BAS, 6) DetectorStartScript for kind <kind> cannot be redefined while detector is running.
(BAS, 9) ERROR IN CONFIGURATION FILE: message.
(BAS, 14) ERROR IN CONFIGURATION FILE: The object <object> belongs to more than one userApplication, userapplication1 and userapplication2. Objects must be children of one and only one userApplication object.
(BAS, 15) ERROR IN CONFIGURATION FILE: The object <object> is a leaf object and this type <type> does not have a detector. Leaf objects must have detectors.
(BAS, 16) ERROR IN CONFIGURATION FILE: The object object has an empty DeviceName attribute. This object uses a detector and therefore it needs a valid DeviceName attribute.
(BAS, 17) ERROR IN CONFIGURATION FILE: The rName is <rname>, its length length is larger than max length maxlength.
(BAS, 18) ERROR IN CONFIGURATION FILE: The duplicate line number is <linenumber>.
(BAS, 19) ERROR IN CONFIGURATION FILE: The NoKindSpecifiedForGdet is <kind>, so no kind specified in hvgdstartup.
(BAS, 23) ERROR IN CONFIGURATION FILE: DetectorStartScript for object object is not defined. Objects of type type should have a valid DetectorStartScript attribute.
(BAS, 24) ERROR IN CONFIGURATION FILE: The object object has an invalid rKind attribute. Objects of type gResource must have a valid rKind attribute.
(BAS, 25) ERROR IN CONFIGURATION FILE: The object object has a ScriptTimeout value that is less than its detector report time. This will cause a script timeout error to be reported before the detector can report the state of the resource. Increase the ScriptTimeout value for object (currently seconds seconds) to be greater than the detector cycle time (currently detectorcycletime seconds).
(BAS, 26) ERROR IN CONFIGURATION FILE: The type of object <object> cannot be 'or' and 'and' at the same time.
(BAS, 27) ERROR IN CONFIGURATION FILE: object <object> is of type 'and', its state is online, but not all children are online.
(BAS, 29) ERROR IN CONFIGURATION FILE: object <object> cannot have its HostName attribute set since it is not a child of any userApplication.
(BAS, 30) ERROR IN CONFIGURATION FILE: The object object has both attributes "LieOffline" and "ClusterExclusive" set. These attributes are incompatible; only one of them may be used.
(BAS, 31) ERROR IN CONFIGURATION FILE: Failed to load a detector of kind <kind>.
(BAS, 32) ERROR IN CONFIGURATION FILE: Object <object> has no detector while all its children's <MonitorOnly> attributes are set to 1.
(BAS, 36) ERROR IN CONFIGURATION FILE: The object object has both attributes "MonitorOnly" and "ClusterExclusive" set. These attributes are incompatible; only one of them may be used.
(BAS, 43) ERROR IN CONFIGURATION FILE: The object object has both attributes "MonitorOnly" and "NonCritical" set. These attributes are incompatible; only one of them may be used.
(BM, 13) no symbol for object <object> in .inp file, line = linenumber.
(BM, 14) Local queue is empty on read directive in line: linenumber.
(BM, 15) destination object <object> is absent in line: linenumber.
(BM, 16) sender object <object> is absent in line: linenumber.
(BM, 17) Dynamic modification failed: line linenumber, cannot build an object of unknown type <symbol>.
(BM, 18) Dynamic modification failed: line linenumber, cannot set value for attribute <attribute> since object <object> does not exist.
(BM, 19) Dynamic modification failed: line linenumber, cannot modify attribute <attribute> of object <object> with value <value>.
(BM, 20) Dynamic modification failed: line linenumber, cannot build object <object> because its type <symbol> is not a user type.
(BM, 21) Dynamic modification failed: cannot delete object <object> because its type <symbol> is not a user type.
(BM, 23) Dynamic modification failed: The <Follow> attribute for controller <controller> is set to 1, but the content of a PriorityList of the controlled application <controlleduserApplication> is different from the content of the PriorityList of the application <userapplication> to which <controller> belongs.
(BM, 24) Dynamic modification failed: some resource(s) supposed to come standby failed.
(BM, 25) Dynamic modification failed: standby capable controller <controller> cannot control application <userapplication> which has no standby capable resources on host <sysnode>.
(BM, 26) Dynamic modification failed: controller <controller> cannot have attributes StandbyCapable and IgnoreStandbyRequest both set to 0.
(BM, 29) Dynamic modification failed: controller object <controller> cannot have its attribute 'Follow' set to 1 while one of OnlineTimeout or StandbyTimeout is not null.
(BM, 42) Dynamic modification failed: application <userapplication> is not controlled by any controller, but has one of its attributes ControlledSwitch or ControlledShutdown set to 1.
(BM, 46) Dynamic modification failed: cannot modify a global attribute <attribute> locally on host <hostname>.
(BM, 54) The RMS-CF-CIP mapping cannot be determined for any host because the CIP configuration file <configfilename> is missing entries. Please verify all entries in <configfilename> are correct and that CF and CIP are fully configured.
(BM, 59) Error errno while reading line <linenumber> of .dob file -- <errorreason>.
(BM, 68) Cannot get message queue parameters using sysdef, errno = <errno>, reason: <reason>.
(BM, 71) Dynamic modification failed: Controller <controller> has its attribute Follow set to 1. Therefore, its attribute IndependentSwitch must be set to 0, and its controlled application <application> must have the attributes AutoSwitchOver = "No", StandbyTransitions = "No", AutoStartUp = 0, ControlledSwitch = 1, ControlledShutdown = 1, and PartialCluster = 0. However, the real values are IndependentSwitch = <isw>, AutoSwitchOver = <asw>, StandbyTransitions = <str>, AutoStartUp = <asu>, ControlledSwitch = <csw>, ControlledShutdown = <css>, and PartialCluster = <pcl>.
(BM, 72) Dynamic modification failed: Controller <controller> with the <Follow> attribute set to 1 belongs to an application <application> which PersistentFault is <appfault>, while its controlled application <controlledapplication> has its PersistentFault <_fault>.
(BM, 73) The RMS-CF interface is inconsistent and will require operator intervention. The routine "routine" failed with error code errorcode - "errorreason".
(BM, 74) The attribute DetectorStartScript and hvgdstartup file cannot be used together. The hvgdstartup file is for backward compatibility only and support for it may be withdrawn in future releases. Therefore it is recommended that only the attribute DetectorStartScript be used for setting new configurations.
(BM, 75) Dynamic modification failed: controller <controller> has its attributes SplitRequest, IgnoreOnlineRequest, and IgnoreOfflineRequest set to 1. If SplitRequest is set to 1, then at least one of IgnoreOfflineRequest or IgnoreOnlineRequest must be set to 0.
(BM, 80) Dynamic modification failed: controller <controller> belongs to the application <application> which AutoSwitchOver attribute has "ShutDown" option set, but its controlled application <controlled> has not.
(BM, 81) Dynamic modification failed: local controller attributes such as NullDetector or MonitorOnly cannot be modified during local modification (hvmod -l).
(BM, 90) Dynamic modification failed: The length of object name <object> is length. This is greater than the maximum allowable name length of maxlength.
(BM, 92) Dynamic modification failed: a non-empty value <value> is set to <ApplicationSequence> attribute of a non-scalable controller <controller>.
(BM, 94) Dynamic modification failed: the ApplicationSequence attribute of a scalable controller <controller> includes application name <hostname>, but this name is absent from the list of controlled applications set to the value of <resource> in the attribute <Resource>.
(BM, 96) Dynamic modification failed: a scalable controller <controller> has its attributes <Follow> set to 1 or <IndependentSwitch> set to 0.
(BM, 97) Dynamic modification failed: controller <controller> attribute <ApplicationSequence> is set to <applicationsequence> which refers to application(s) not present in the configuration.
(BM, 98) Dynamic modification failed: two scalable controllers <controller1> and <controller2> control the same application <application>.
(BM, 99) Dynamic modification failed: controlled application <controlledapp> runs on host <hostname>, but it is controlled by a scalable controller <scontroller> which belongs to an application <controllingapp> that does not run on that host.
(BM, 101) Dynamic modification failed: controlled application <controlledapp> runs on host <hostname>, but it is controlled by a scalable controller <scontroller> which belongs to a controlling application <controllingapp> that does not allow for the controller to run on that host.
(BM, 103) Dynamic modification failed: Controller <controller> has its attribute Follow set to 1 and the controlled application <application> has StandbyCapable resources. Therefore the controller itself must have StandbyCapable set to 1 and IgnoreStandbyRequest must be set to 0.
(BM, 105) Dynamic modification failed: Invalid kind of generic resource specified in DetectorStartScript <script> for object <object>.
(BM, 106) The rKind attribute of object <object> does not match the value of the '-k' flag of its associated detector.
(BM, 107) Illegal different values for rKind attribute in object <object>.
(BM, 108) Dynamic modification failed: Scalable controller <object> cannot have its attribute <SplitRequest> set to 1.
(BM, 109) Dynamic modification failed: Application <application> has its attribute PartialCluster set to 1 or is controlled, directly or indirectly, via a Follow controller that belongs to another application that has its attribute PartialCluster set to 1 -- this application <application> cannot have a cluster exclusive resource <resource>.
(BM, 110) Dynamic modification failed: Application <application> is controlled by a scalable controller <controller>, therefore it cannot have its attribute <ControlledShutdown> set to 1 while its attribute <AutoSwitchOver> includes option <ShutDown>.
(BM, 111) Dynamic modification failed: Line #line is too big.
(BM, 113) Base monitor has reported 'Faulted' for host <Sysnode>.
(BM, 122) getaddrinfo failed, reason: errorreason, errno <errno>. Failed to allocate a socket for "rmshb" port monitoring.
(CML, 11) Option (option) requires an operand.
(CML, 12) Unrecognized option option.
(CML, 17) Incorrect range argument with -l option.
(CML, 18) Log level <loglevel> is too large. The valid range is 1..maxloglevel with the -l option.
(CML, 19) Invalid range <low - high>. Within the '-l' option, the end range value must be larger than the first one.
(CML, 20) Log level must be numeric.
(CML, 21) 0 is an invalid range value. 0 implies all values. If a range is desired, the valid range is 1..maxloglevel with the -l option.
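Messages (CML, 17) through (CML, 21) describe the validation of the '-l' log-level option. A hedged sketch of that validation logic (the "low-high" syntax and the MAXLOGLEVEL value of 21 are assumptions for illustration, not the documented limits):

```python
MAXLOGLEVEL = 21  # assumed placeholder for the real maxloglevel

def parse_log_range(arg):
    """Parse a '-l' argument ('N' or 'low-high') per (CML, 17)-(CML, 21)."""
    low, sep, high = arg.partition("-")
    if not low.isdigit() or (sep and not high.isdigit()):
        raise ValueError("Log level must be numeric.")    # (CML, 20)
    lo = int(low)
    hi = int(high) if sep else lo
    if lo == 0 or hi == 0:
        raise ValueError("0 is an invalid range value.")  # (CML, 21)
    if hi > MAXLOGLEVEL:
        raise ValueError("Log level is too large.")       # (CML, 18)
    if sep and hi <= lo:
        raise ValueError("Invalid range.")                # (CML, 19)
    return lo, hi

print(parse_log_range("3-7"))  # -> (3, 7)
```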
(CRT, 1) FindNextHost: local host not found in priority list of nodename.
(CRT, 2) cannot obtain the NET_SEND_Q queue.
(CRT, 3) Message send failed.
(CRT, 4) object: type Contract retransmit failed: Message Id = messageid. See bmlog for contract details.
(CRT, 5) The contract <crtname> is being dropped because the local host <crthost> has found the host originator <otherhost> in state <state>. That host is expected to be in state Online. Please check the interhost communication channels and make sure that these hosts see each other Online.
(CTL, 1) Controller <controller> will not operate properly since its controlled resource <resource> is not in the configuration.
(CTL, 2) Controller <controller> detected more than one controlled application Online. This has led to the controller fault. Therefore, all the online controlled applications will now be switched offline.
(CUP, 2) object: cluster is in inconsistent condition: current online host conflict, received: host, local: onlinenode.
(CUP, 3) object is already waiting for an event; cannot set timer!
(CUP, 5) object received unknown contract.
(CUP, 7) userApplication is locally online, but is also online on another host.
(CUP, 8) object: could not get an agreement about the current online host; cluster may be in an inconsistent condition!
(DET, 1) FAULT REASON: Resource <resource> transitioned to a Faulted state due to a child fault.
(DET, 2) FAULT REASON: Resource <resource> transitioned to a Faulted state due to a detector report.
(DET, 3) FAULT REASON: Resource <resource> transitioned to a Faulted state due to a script failure.
(DET, 4) FAULT REASON: Resource <resource> transitioned to a Faulted state due to a FaultScript failure. This is a double fault.
(DET, 5) FAULT REASON: Resource <resource> transitioned to a Faulted state due to the resource failing to come Offline after running its OfflineScript (offlineScript).
(DET, 6) FAULT REASON: Resource <resource> transitioned to a Faulted state due to the resource failing to come Online after running its OnlineScript (onlinescript).
(DET, 7) FAULT REASON: Resource <resource> transitioned to a Faulted state due to the resource unexpectedly becoming Offline.
(DET, 11) DETECTOR STARTUP FAILED: Corrupted command line <commandline>.
(DET, 12) DETECTOR STARTUP FAILED <detector>. REASON: errorreason.
(DET, 13) Failed to execute script <script>.
(DET, 24) FAULT REASON: Resource <resource> transitioned to a Faulted state due to the resource failing to come Standby after running its OnlineScript (onlinescript).
(DET, 26) FAULT REASON: Resource <resource> transitioned to a Faulted state due to the resource failing to come Online.
(DET, 28) <object>: CalculateState() was invoked for a non-local object! This must never happen. Check for possible configuration errors!
(DET, 33) DETECTOR STARTUP FAILED: Restart count exceeded.
(DET, 34) No heartbeat has been received from the detector with pid <pid>, <startupcommand>, during the last <seconds> seconds. The base monitor will send the process a SIGALRM to interrupt the detector if it is currently stalled waiting for the alarm.
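The (DET, 34) message above describes the base monitor sending SIGALRM to wake a detector stalled in a wait. The mechanism can be sketched in Python (timings and names are illustrative, not the RMS values; here the process alarms itself rather than being signaled by another process):

```python
import signal

# A SIGALRM handler records that the stalled wait was interrupted.
fired = []

def on_alarm(signum, frame):
    fired.append(signum)

signal.signal(signal.SIGALRM, on_alarm)
signal.alarm(1)    # stand-in for the base monitor's alarm after 1 second
signal.pause()     # "stalled detector" waits; SIGALRM wakes it up
print("interrupted by SIGALRM:", fired == [signal.SIGALRM])
```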
(GEN, 1) Usage: command -t time_interval -k kind [-d]
(GEN, 3) Cannot open command log file.
(GEN, 4) failed to create mutex: directory
(GEN, 5) command: failed to get information about RMS base monitor bm!
(GEN, 7) command: failed to lock virtual memory pages, errno = value, reason: reason.
(INI, 1) Cannot open file dumpfile, errno = errno: explanation.
(INI, 9) Cannot close file dumpfile, errno = errno: explanation.
(MIS, 1) No space for object.
(QUE, 13) RCP fail: filename is being copied.
(QUE, 14) RCP fail: fwrite errno errno.
(SCR, 8) Invalid script termination for controller <controller>.
(SCR, 9) REASON: failed to execute script <script> with resource <resource>: errorreason.
(SCR, 20) The attempt to shut down the cluster host host has failed: errorreason.
(SCR, 21) Failed to execute the script <script>, errno = <errno>, error reason: <errorreason>.
(SCR, 26) The sdtool notification script has failed with status status after dynamic modification.
(SWT, 4) object is online locally, but is also online on onlinenode.
(SWT, 20) Could not remove host <hostname> from local priority list.
(SWT, 25) objectname: outstanding switch request of dead host was denied; cluster may be in an inconsistent condition!
(SWT, 26) object: dead host <hostname> was holding an unknown lock. Lock will be skipped!
(SWT, 45) hvshut aborted because of a busy uap <userapplication>.
(SWT, 46) hvshut aborted because modification is in progress.
(SWT, 84) The userApplication application is in an Inconsistent state on multiple hosts. hvswitch cannot be processed until this situation is resolved by bringing the userApplication Offline on all hosts - use hvutil -f application to achieve this.
(SYS, 1) Error on SysNode: object. It failed to send the kill success message to the cluster host: host.
(SYS, 8) RMS failed to shut down the host host via a Shutdown Facility, no further kill functionality is available. The cluster is now hung. An operator intervention is required.
(SYS, 13) Since this host <hostname> has been online for no more than time seconds, and due to the previous error, it will shut down now.
(SYS, 14) Neither automatic nor manual switchover will be possible on this host until the <detector> detector reports offline or faulted.
(SYS, 15) The uname() system call returned with Error. RMS will be unable to verify the compliance of the RMS naming convention!
(SYS, 17) The RMS internal SysNode name "sysnode" is ambiguous with the name "name". Please adjust the names to comply with the RMS naming convention "SysNode = `uname -n`RMS"
(SYS, 48) Remote host <hostname> replied with the checksum <remotechecksum>, which is different from the local checksum <localchecksum>. The sysnode of this host will not be brought online.
(SYS, 49) Since this host <hostname> has been online for more than time seconds, and due to the previous error, it will remain online, but neither automatic nor manual switchover will be possible on this host until the <detector> detector reports offline or faulted.
(SYS, 50) Since this host <hostname> has been online for no more than time seconds, and due to the previous error, it will shut down now.
(SYS, 84) Request <hvshut -a> timed out. RMS will now terminate! Note: some cluster hosts may still be online!
(SYS, 90) hostname internal WaitList addition failure! Cannot set timer for delayed detector report action!
(SYS, 93) The cluster host nodename is not in the Wait state. The hvutil command request failed!
(SYS, 94) The last detector report for the cluster host hostname is not online. The hvutil command request failed!
(SYS, 97) Cannot access the NET_SEND_Q queue.
(SYS, 98) Message send failed in SendJoinOk.
(SYS, 100) The value of the attribute <attr> specified for SysNode <sysnode> is <invalidvalue> which is invalid. Ensure that the entry for <attr> is resolvable to a valid address.
(SYS, 101) Unable to start RMS on the remote SysNode <SysNode> using cfsh, rsh or ssh.
(UAP, 1) Request to go online will not be granted for application <userapplication> since the host <sysnode> runs a different RMS configuration.
(UAP, 5) object: cmp_Prio: list.
(UAP, 6) Could not add new entry to priority list.
(UAP, 7) Could not remove entries from priority list.
(UAP, 8) object: cpy_Prio failed, source list corrupted.
(UAP, 9) object: Update of PriorityList failed, cluster may be in inconsistent condition.
(UAP, 15) sysnode: PrepareStandAloneContract() processing unknown contract.
(UAP, 16) object::SendUAppLockContract: local host doesn't hold a lock -- Contract processing denied.
(UAP, 19) object::SendUAppLockContract: LOCK Contract cannot be sent.
(UAP, 21) object::SendUAppUnLockContract: UNLOCK Contract cannot be sent.
(UAP, 22) object unlock processing failed, cluster may be in an inconsistent condition!
(UAP, 23) object failed to process UNLOCK contract.
(UAP, 24) Deleting of local contractUAP object failed, cannot find object.
(UAP, 27) object received a DEACT contract in state: state.
(UAP, 28) object failed to update the priority list. Cluster may be in an inconsistent state.
(UAP, 29) object: contract data section is corrupted.
(UAP, 32) object received unknown contract.
(UAP, 33) object unknown task in list of outstanding contracts.
(UAP, 35) object: inconsistency occurred. Any further switch request will be denied (except forced requests). Clear inconsistency before invoking further actions!
(UAP, 41) cannot open file filename. Last Online Host for userApplication cannot be stored into non-volatile device.
(UAP, 42) found incorrect entry in status file:<entry>
(UAP, 43) <object>: could not insert <host> into local priority list.
(UAP, 44) <object>: could not remove <host> from local priority list.
(UAP, 45) <object>: could not remove <host> from priority list.
(UAP, 51) Failed to execute the fcntl system call to set the file descriptor flags for file filename: errno = <errornumber>: <errortext>.
(US, 5) The cluster host hostname is no longer reachable! Please check the status of the host and the Ethernet connection.
(US, 6) RMS has died unexpectedly on the cluster host hostname!
(US, 31) FAULT REASON: Resource resource transitioned to a Faulted state due to a detector report.
(WLT, 1) FAULT REASON: Resource resource's script (scriptexecd) has exceeded the ScriptTimeout of timeout seconds.
(WLT, 3) Cluster host hostname's Shutdown Facility invoked via (script) has not finished in the last time seconds. An operator intervention is required!
(WLT, 5) CONTROLLER FAULT: Controller <object> has propagated <request> request to its controlled application(s) <applications>, but the request has not been completed within the period of <timeout> seconds.
(WLT, 9) sdtool notification timed out after <timeout> seconds.
(WRP, 1) Failed to set script to TS.
(WRP, 2) Illegal flag for process wrapper creation.
(WRP, 3) Failed to execv: command.
(WRP, 4) Failed to create a process: command.
(WRP, 5) No handler for this signal event <signal>.
(WRP, 6) Cannot find process (pid=processid) in the process wrappers.
(WRP, 7) getservbyname failed for service name: servicename.
(WRP, 8) gethostbyname failed for remote host: host.
(WRP, 9) Socket open failed.
(WRP, 12) Failed to bind port to socket.
(WRP, 13) Cannot allocate memory, errno <errno> - strerrno.
(WRP, 14) No available slot to create a new host instance.
(WRP, 15) gethostbyname(hostname): host name should be in /etc/hosts
(WRP, 16) No available slot for host hostname
(WRP, 17) Size of integer or IP address is not 4-bytes
(WRP, 18) Not enough memory in <processinfo>
(WRP, 23) The child process <cmd> with pid <pid> could not be killed due to errno <errno>, reason: reason.
(WRP, 24) Unknown flag option set for 'killChild'.
(WRP, 25) Child process <cmd> with pid <pid> has exceeded its timeout period. Will attempt to kill the child process.
(WRP, 29) RMS on the local host has received a message from host host, but the local host is unable to resolve the sending host's address. This could be due to a misconfiguration. This message will be dropped. Further such messages will appear in the switchlog.
(WRP, 30) RMS on the local host has received a message from host host, but the local host is unable to resolve the sending host's address. This message will be dropped. Please check for any misconfiguration.
(WRP, 31) RMS has received a message from host host with IP address receivedip. The local host has calculated the IP address of that host to be calcip. This may be due to a misconfiguration in /etc/hosts. Further such messages will appear in the switchlog.
(WRP, 32) RMS has received a message from host host with IP address receivedip. The local host has calculated the IP address of that host to be calcip. This may be due to a misconfiguration in /etc/hosts.
(WRP, 33) Error while creating a message queue with the key <id>, errno = <errno>, explanation: <explanation>.
(WRP, 34) Cluster host host is no longer in time sync with local node. Sane operation of RMS can no longer be guaranteed. Further out-of-sync messages will appear in the syslog.
(WRP, 35) Cluster host host is no longer in time sync with local node. Sane operation of RMS can no longer be guaranteed.
(WRP, 52) The operation func failed with error code errorcode.
(WRP, 60) The elm heartbeat detects that the cluster host <hostname> has become offline.
(WRP, 68) Unable to update the RMS lock file, function <function>, errno <errno> - errorreason.
(WRP, 69) function failed, reason: errorreason, errno <errno>.
(WRP, 71) Both IPv4 and IPv6 addresses are assigned to <SysNode> in /etc/hosts.
(ADC, 16) Because some of the global environment variables were not set in hvenv file, RMS cannot start up. Shutting down.
(ADC, 21) Because some of the local environment variables were not set in hvenv file, RMS cannot start up. Shutting down.
(ADC, 69) RMS will not start up - previous errors opening file.
(ADC, 73) The environment variable <hvenv> has value <value> which is out of range.
(ADM, 1) cannot open admin queue
(ADM, 2) RMS will not start up - errors in configuration file
(BM, 3) Usage: progname [-c config_file] [-m] [-h time] [-l level] [-n]
(BM, 49) Failure calculating configuration checksum
(BM, 51) The RMS-CF interface is inconsistent and will require operator intervention. The routine "routine" failed with errno errno - "error_reason"
(BM, 58) Not enough memory -- RMS cannot continue its operations and is shutting down
(BM, 67) An error occurred while writing out the RMS configuration after dynamic modification. RMS is shutting down.
(BM, 69) Some of the OS message queue parameters msgmax= <msgmax>, msgmnb= <msgmnb>, msgmni=<msgmni>, msgtql=<msgtql> are below lower bounds <hvmsgmax>, <hvmsgmnb>, <hvmsgmni>, <hvmsgtql>. RMS is shutting down.
(BM, 89) The SysNode length is length. This is greater than the maximum allowable length of maxlength. RMS will now shut down.
(BM, 116) The RMS-CF interface is inconsistent and will require operator intervention. The CF layer is not yet initialized.
(BM, 117) The RMS-CIP interface state on the local node cannot be determined due to error in popen() -- errno = errornumber: errortext.
(BM, 118) The RMS-CIP interface state on the local node is required to be "UP", the current state is state.
(CML, 14) ###ERROR: Unable to find or Invalid configuration file.########CONFIGURATION MONITOR exits !!!!!######
(CMM, 1) Error establishing outbound network communication
(CMM, 2) Error establishing inbound network communication
(CRT, 6) Fatal system error in RMS. RMS will shut down now. Please check the bmlog for SysNode information.
(DET, 8) Failed to create DET_REP_Q
(DET, 9) Message send failed in detector request Q: queue
(DET, 16) Cannot create gdet queue of kind gkind
(DET, 18) Error reading hvgdstartup file. Error message: errorreason.
(INI, 4) InitScript does not have execute permission.
(INI, 7) sysnode must be in your configuration file
(INI, 10) InitScript has not completed within the allocated time period of timeout seconds.
(INI, 11) InitScript failed to start up, errno errno, reason: reason.
(INI, 12) InitScript returned non-zero exit code exitcode.
(INI, 13) InitScript has been stopped.
(INI, 14) InitScript has been abnormally terminated.
(INI, 17) Controller controller refers to an unknown userApplication <userapplication>
(INI, 18) Configuration uses objects of type "controller" and of type "gcontroller". These object types are mutually exclusive!
(INI, 19) userApplication <childapp> is simultaneously controlled by 2 gcontroller objects <controller1> and <controller2>. This will result in unresolvable conflicts!
(INI, 20) Incorrect configuration of the gcontroller object <controller>! The attributes "Resource" and "ControllerType" are mandatory.
(INI, 21) Incorrect configuration of the gcontroller object <controller>! It has the attribute Local set, but the host list for the controlled application <childapp> does not match the host list for the controlling application <parentapp>.
(MIS, 4) The locks directory directory cannot be cleaned of all old locks files: error at call of file: filename, errno = errnonumber, error -- errortext.
(MIS, 9) The locks directory directory does not exist. An installation error occurred or the directory was removed after installation.
(QUE, 1) Error status in ADMIN_Q.
(QUE, 2) Read message failed in ADMIN_Q.
(QUE, 5) Network message read failed.
(QUE, 6) Network problem occurred.
(QUE, 11) Read message failed in DET_REP_Q.
(QUE, 12) Error status in DET_REP_Q: status.
(QUE, 15) Error No errornumber : <errortext> in accessing the message queue.
(SCR, 4) Failed to create a detector request queue for detector detector_name.
(SCR, 5) REQUIRED PROCESS RESTART FAILED: Unable to restart detector. Shutting down RMS.
(SCR, 10) InitScript did not run ok. RMS is being shut down
(SCR, 12) incorrect initialization of RealDetReport; Shutting down RMS.
(SCR, 13) ExecScript: Failed to exec script <script> for object <objectname>: errno errno
(SYS, 33) The RMS cluster host <hostname> does not have a valid entry in the /etc/hosts file. The lookup function gethostbyname failed. Please change the name of the host to a valid /etc/hosts entry and then restart RMS.
(SYS, 52) SysNode sysnode: error creating necessary message queue NODE_REQ_Q...exiting.
(UAP, 36) object: double fault occurred, but Halt attribute is set. RMS will exit immediately in order to allow a failover!
(US, 1) RMS will not start up - fatal errors in configuration file.
(US, 42) A State transition error occurred. See the next message for details.
(WRP, 40) The length of the type name specified for the host host is <length> which is greater than the maximum allowable length <maxlength>. RMS will exit now.
(WRP, 44) Not enough slots left in the wrapper data structure to create new entries.
(WRP, 45) The SysNode to the CIP name mapping for <sysnode> has failed.
(WRP, 46) The RMS-CF interface is inconsistent and will require operator intervention. The routine "routine" failed with error code errorcode -"errorreason".
(WRP, 47) The RMS-CF-CIP mapping cannot be determined for any host as the CIP configuration file <configfilename> cannot be opened. Please verify that all the entries in <configfilename> are correct and that CF and CIP are fully configured.
(WRP, 48) The RMS-CF-CIP mapping cannot be determined for any host as the CIP configuration file <configfilename> has missing entries. Please verify that all the entries in <configfilename> are correct and that CF and CIP are fully configured.
(WRP, 54) The heartbeat mode setting of <hbmode> is wrong. Cannot use ELM heartbeat method on non-CF cluster.
(WRP, 55) The heartbeat mode setting of <hbmode> is wrong. The valid settings are '1' for ELM+UDP and '0' for UDP.
(WRP, 58) The ELM lock resource <resource> for the local host is being held by another node or application.
(WRP, 64) The ELM heartbeat startup failure for the cluster host <hostname>.
(WRP, 67) The RMS-CF-CIP mapping cannot be determined for any host as the CIP configuration file <configfilename> has missing entries. Please verify that all the entries in <configfilename> are correct and that CF and CIP are fully configured.
breakapplicationsX skipping all policy based failover checks, since the intended state "<intended state>" is neither Online nor Standby.
NOTICE: /opt/SMAW/bin/hvawsipalias: Allocation-ID not found on Instance.
NOTICE: /opt/SMAW/bin/hvawsipalias: Associate Elastic IP address with network interface.
NOTICE: /opt/SMAW/bin/hvawsipalias: Associates the DNS record IP address with the domain name.
NOTICE: /opt/SMAW/bin/hvawsipalias: External command executed. detail=<aws_command options>
NOTICE: /opt/SMAW/bin/hvawsipalias: Instance-ID not found on RouteTables.
NOTICE: /opt/SMAW/bin/hvawsipalias: Received SIGTERM signal.
NOTICE: /opt/SMAW/bin/hvawsipalias: Replacing the route of Route Table.
NOTICE: /opt/SMAW/bin/hvawsipalias: Start begin.
NOTICE: /opt/SMAW/bin/hvawsipalias: Start end. Return code=<code>.
NOTICE: /opt/SMAW/bin/hvawsipalias: Stop begin.
NOTICE: /opt/SMAW/bin/hvawsipalias: Stop end. Return code=<code>.
NOTICE: /opt/SMAW/bin/hvawsipalias: The IpAddress <IpAddress> not found on HostZone.
NOTICE: /opt/SMAW/bin/hvawsipalias: The network interface is not active.
NOTICE: /opt/SMAW/bin/hvawsipalias: Waiting until the DNS record status becomes INSYNC.
NOTICE: /opt/SMAW/bin/hvazureipalias: External command executed. detail=<azure_command_options>
NOTICE: /opt/SMAW/bin/hvazureipalias: InstanceIPAddress not found on Route Table.
NOTICE: /opt/SMAW/bin/hvazureipalias: Received SIGTERM signal.
NOTICE: /opt/SMAW/bin/hvazureipalias: Replacing the route of Route Table.
NOTICE: /opt/SMAW/bin/hvazureipalias: Start begin.
NOTICE: /opt/SMAW/bin/hvazureipalias: Start end. Return code=<code>.
NOTICE: /opt/SMAW/bin/hvazureipalias: Stop begin.
NOTICE: /opt/SMAW/bin/hvazureipalias: Stop end. Return code=<code>.
NOTICE: /opt/SMAW/bin/hvazureipalias: The combination of Instance and its IP address is invalid. InstanceIPAddress=<VirtualMachineIPAddress>, InstanceID=<ResourceID>
NOTICE: /opt/SMAW/bin/hvscsireserve clear called
NOTICE: /opt/SMAW/bin/hvscsireserve preempt-abort succeeded. prekey is <prekey>
NOTICE: /opt/SMAW/bin/hvscsireserve register failed
NOTICE: /opt/SMAW/bin/hvscsireserve register failed. But ignore
NOTICE: /opt/SMAW/bin/hvscsireserve register succeeded
NOTICE: /opt/SMAW/bin/hvscsireserve release failed. But ignore
NOTICE: /opt/SMAW/bin/hvscsireserve release failed. Return code <code>
NOTICE: /opt/SMAW/bin/hvscsireserve release succeeded
NOTICE: /opt/SMAW/bin/hvscsireserve reserve failed. But ignore. key is <key>
NOTICE: /opt/SMAW/bin/hvscsireserve reserve succeeded
NOTICE: /opt/SMAW/bin/hvsgpr: AppState is Standby
NOTICE: /opt/SMAW/bin/hvsgpr: Fault begin
NOTICE: /opt/SMAW/bin/hvsgpr: Fault end. Return code <code>
NOTICE: /opt/SMAW/bin/hvsgpr: IP advertising began
NOTICE: /opt/SMAW/bin/hvsgpr: IP advertising ended. Return code <code>
NOTICE: /opt/SMAW/bin/hvsgpr: OfflineDone begin
NOTICE: /opt/SMAW/bin/hvsgpr: OfflineDone end. Return code <code>
NOTICE: /opt/SMAW/bin/hvsgpr: PreOnline begin
NOTICE: /opt/SMAW/bin/hvsgpr: PreOnline end. Return code <code>
NOTICE: /opt/SMAW/bin/hvsgpr: Start begin
NOTICE: /opt/SMAW/bin/hvsgpr: Start end. Return code <code>
NOTICE: /opt/SMAW/bin/hvsgpr: Stop begin
NOTICE: /opt/SMAW/bin/hvsgpr: Stop end. Return code <code>
NOTICE About to configure <Interface> ...
NOTICE About to configure <Interface> with <IpAddress> <Netmask> ...
NOTICE About to configure <MountPoint> ...
NOTICE About to configure <ZpoolName> ...
NOTICE: About to configure zone <zone name> ...
NOTICE: About to export <MountPoint> ...
NOTICE About to export zpool <ZpoolName> ...
NOTICE About to import zpool <ZpoolName> ...
NOTICE: About to switch <app> <state>
NOTICE About to unconfigure <Interface> ...
NOTICE About to unconfigure <InterfaceName> ...
NOTICE About to unconfigure <Interface> prior to re-configuring.
NOTICE: About to unconfigure <MountPoint>
NOTICE: About to unexport <MountPoint>
NOTICE: About to unshare <MountPoint> ...
NOTICE: access to <Mount> failed.
NOTICE: access to <Mount> succeeded once again
NOTICE Acquire <NfsDirName> by moving to <NfsDirName>.RMS.moved.
NOTICE: A forcible attempt will be made to bring <application> out of maintenance mode with "hvutil -m forceoff <application>" ...
NOTICE: A hung umount command for <BlockSpec> is already running
NOTICE: Alarm clock rang!
NOTICE: Alarm clock set to <value> ...
NOTICE: All hosts for <app> are not Online or are being shut down, so there is no need to wait for it.
NOTICE: Already reserved
NOTICE: An attempt will be made to bring <application> out of maintenance mode with "hvutil -m off <application>" ...
NOTICE: <app> Faulted on all hosts!
NOTICE: <app> Faulted on <host>
NOTICE: <app> is already Offline on all hosts -- no action required
NOTICE: <app> is already <state> on <host> -- no switch required
NOTICE: <app> is busy, on <host>, waiting ...
NOTICE: <app> is in the Unknown state on <host>, waiting ...
NOTICE: <app> is in maintenance mode, skipping <x> ...
NOTICE: <app> is not yet coming <state> anywhere, re-executing necessary command ...
NOTICE: <app> is not yet <state> anywhere, waiting ...
NOTICE: <app> is not yet <state> everywhere, waiting ...
NOTICE: <app> is Online on <host>
NOTICE: <app> is Standby on <host>
NOTICE: <application> is (going) in maintenance mode, ignoring it ...
NOTICE: <application> is going Online and is beyond the PreCheckScript stage and therefore has priority, so exit with error here ...
NOTICE: <application> is in Wait and <resource> is Offline, so <application> is in PreCheckScript or Standby processing
NOTICE: <application> is Online and has priority, so exit with error here ...
NOTICE: <application> LicenseToKillWait=yes, so may wait a total of approximately <lower priority sleep value> seconds, if necessary, to ensure any higher priority application starts its Online processing first ...
NOTICE: Application specific entry at line <LineNo> overloads the generic entry at line <LineNo> in <Fstab> for mp <MountPoint>.
NOTICE: ApplicationSequence=<xxx>
NOTICE: <application> within the same set <priority set> with AppPriority=<priority>
NOTICE: <application> within same set with AppPriority=<priority>
NOTICE: <app> LicenseToKill=<xxx> KillPrioritySet=<xx> KillPriority=<xxx>
NOTICE: <app> on <host>: <state>, skipping Standby request
NOTICE: Begin to kill the IP advertising process
NOTICE: Break applications due to <application> coming <intended state> with BreakValue=<break value> KillPriority=<kill priority>
NOTICE: Break applications due to <application> coming up.
NOTICE: Breaking <application> after hvswitch due to <application> coming up.
NOTICE: Breaking <application> due to <application> coming up.
NOTICE: Breaking <application> with Autobreak=yes due to <application> coming up.
NOTICE: Cannot allocate memory for zfs entry, current size = <Size> for <Number> entry. Cannot detect the zpool properly.
NOTICE: cannot get the address of host <Host> from the hosts file
NOTICE: cannot get the address of interface <Address> from the hosts file!
NOTICE: cannot grab mount lock for dostat() check_getbdev(), returning previous state
NOTICE: cannot grab mount lock for dostat() <X>, returning previous state
NOTICE: Cannot read 4 fields in sharetab entry. Ignoring this line.
NOTICE: Cannot read <Sharetab>.
NOTICE: cannot send context->socket: <Socket>, (<ErrorMsg>)
NOTICE: cannot unlock mount lock for dostat() <X>
NOTICE cf host <Host> found in <IpConfFile>
NOTICE cf host <Host> not found in <IpConfFile>, researching with uname value
NOTICE: Check completed successfully. file=<config>
NOTICE CheckForTrustedHosts -a <Address> -r <Resource> dummy ...
NOTICE CheckForTrustedHosts could not determine ResourceName, skipping ping check
NOTICE Check if <Interface> is both UP and RUNNING ...
NOTICE: Checking if <app> is <state> everywhere ...
NOTICE: Checking if <app> is <state> somewhere ...
NOTICE Checking <NfsLock>
NOTICE Check link status by using the link detection tools.
NOTICE: checkReservation <disk> <reservation_key> <option> called. Return code <code>
NOTICE: Child processes of <process id list> : <process id list>
NOTICE: child process=<pid>  still running
NOTICE: Clearing <application> due to <application> coming up.
NOTICE: <command> <app>
NOTICE: command <Command> timed out, returning <State>
NOTICE: "<command>" exited with code <state>
NOTICE: command <pid> (<command>) still running, but has exceeded the timeout value <timeout> for reporting a state to RMS, with the previous state unknown, so returning offline
NOTICE: command <pid> (<command>) still running, but has exceeded the timeout value <timeout>, returning the previous state <state>
NOTICE: <command> stopping further processing since the NoProceed option has been specified
NOTICE: Command underlying file system type for <mount> is <BlkidType>
NOTICE Computed broadcast address <Bcast>
NOTICE Configuring interface <Interface> with <Ipaddress> <Netmask> <Broadcast>
NOTICE Create <Interface> as <IpAddress> <Options>.
NOTICE Create new <NfsDirName>
NOTICE Creating another mount point to re-establish the connection.
NOTICE: cycle time has been reset to <Value>
NOTICE: dd if=<CharSpec> of=/dev/null count=1025 failed, try again
NOTICE Deconfiguring interface <Interface>
NOTICE Deconfiguring of interface <Interface> failed <errorcode>.
NOTICE Deconfiguring of interface <Interface> failed.
NOTICE: delete the old arp entry for <host>
NOTICE: Determining child processes of <process id list> ...
NOTICE Directory <MountPoint> does not exist and is generated now.
NOTICE Directory <NfsDirName> has files in it.
NOTICE Directory <NfsDirName>.RMS.moved already present.
NOTICE Directory <NfsDirName>.RMS.moved found and has files in it.
NOTICE: disk is <disk>. ReservationKey is <reservation_key>
NOTICE: disk is <disk>. ReservationKey is <reservation_key>. opt is <option>
NOTICE: Doing i/o on <MountPoint>
NOTICE: doopenread of mount-device (pid xxxx), counter=x not done yet reporting status, waiting ...
NOTICE: dostat of mount-point (pid xxxx), counter=x not done yet reporting status, waiting ...
NOTICE: <enable/disable> resource detection for <resource>
NOTICE: end of dopopen wait
NOTICE ensure base interface <Interface> is working ...
NOTICE: ExecuteCommandForApp missing arguments: App=<app> Command=<command>
NOTICE: Exiting successfully.
NOTICE: failed to get mpinfo for <MountPoint>. cannot open <Fstab> (<Errno>)
NOTICE: failed to open device "<tempfile>", (<errormsg>)
NOTICE: failed to open/read device <Mount> (<ErrMsg>)
NOTICE: failed to write tempfile "<File>" within <Maxretry> try/tries
NOTICE: File <File> does not exist or permission denied.(<Errno>)
NOTICE: File system type for <CharSpec> is <Type>, using <FsckCommand>
NOTICE: <Filesystem> was not mounted at <MountPoint>
NOTICE Flushing device <RawDevice>
NOTICE Following processes will be killed:
NOTICE forced umount <MountPoint> done.
NOTICE: fork() failed for "<command>" with <errno>
NOTICE: Found a read only flag in <MountOptions>, setting context->rwflag to O_RDONLY
NOTICE: Found a read only flag in <MountOptions>, setting readonly attribute to 1
NOTICE: found read only option: <MountOptions>
NOTICE: Found the legacy mount point <MountPoint> in vfstab[.pcl] and resolve zpool name <ZpoolName>. Start monitoring of this resource.
NOTICE Fuser: fuser command failed with error code <RetCode>
NOTICE Fuser: killing active processes on <MountPoint> : <Pids>
NOTICE <FuserLsof> <FLOption> -s <NfsServer> <MountPoint>. ...
NOTICE FuserLsofPid is <Pid>
NOTICE: Fuser <MountPoint> ...
NOTICE Fuser: No processes active in <MountPoint>
NOTICE: FuserPid is <FuserPid>
NOTICE: GdStop cannot find the context <Value> in the context list for <String>.
NOTICE: getting block device for <Mount> failed
NOTICE Got <NfsLock>
NOTICE: Hosts for <app> are <host>
NOTICE: hvappsequence complete
NOTICE hvcheckinterface <interface> ...
NOTICE hvcheckinterface <interface> Ok, returning ...
NOTICE hv_nfs.client: sending <signal> to nfsd ...
NOTICE hv_nfs.client: MaxCount <MaxCount> reached, stop_nfsd_proc killing nfsd daemons ...
NOTICE hv_nfs.client start nfsd <argument> starting ...
NOTICE hv_nfs.client: <pid> still running ...
NOTICE <hv_nfs-c> <MountPoint> already has NFS filesystem mounted on it.
NOTICE <hv_nfs-c/u> <MountPoint> is a symbolic name and its target already has NFS filesystem mounted on it.
NOTICE: hvnfsrestart: <application> is <state>, no need to continue for wait for aliases to be <state> ...
NOTICE: hvnop -m UApp_ReqStandby -s <host> <app>
NOTICE: hvutil -m off <application> failed with error code <return value> (<error output>).
NOTICE ifconfig <Interface> <IfConfig> 2>/dev/null ...
NOTICE Ignore Acquire <NfsDirName> by move. re-try mount.
NOTICE: interface <Address> is reconfigured, old is <Interface1>, new is <Interface2>
NOTICE: interface check for <Interface>  failed
NOTICE: interface check for <Interface> failed, lying online for <Count> seconds
NOTICE: interface check for <Interface> failed, ping scheme skipped
NOTICE: interface check for <Interface> succeeded
NOTICE <Interface> (<IpAddress>) was successfully configured and is working.
NOTICE <Interface> (<IpAddress>) was successfully unconfigured.
NOTICE <Interface> cannot be configured, unconfiguring.
NOTICE: <Interface> for host <Host> bound to <Interface1> (<Address>)
NOTICE <Interface> is already configured.
NOTICE <Interface> is already unconfigured.
NOTICE <Interface> is not UP, better initialize it ...
NOTICE <Interface> is not UP, better initialize it to address 0.0.0.0.
NOTICE <Interface> wanted on <WantedInterfaces> is already configured on <Interface> and pings successfully.
NOTICE <Interface> was already configured and running. Re-configure it.
NOTICE <Interface> was already configured and running. Use it without re-configuration.
NOTICE <IpAddress> cannot be configured on <Interface>
NOTICE: ip addr del <ipaddrprefix> failed (<errorcode>).
NOTICE: IP advertising ended because <CF nodename> is UP
NOTICE: IP advertising ended because the Offline processing of <application> began
NOTICE: IP advertising ended because IP advertising was executed <count> times
NOTICE: ip link set dev <interface> down failed (<errorcode>).
NOTICE: key is <key>. opt is <option>. disk is <disk>. Return code <code>
NOTICE "KillFuserLsof: kill fuser pid <FuserLsofPid>"
NOTICE "KillFuserLsof: No fuser process running"
NOTICE "KillFuserLsof: pkill -P <FuserLsofPid>"
NOTICE: KillFuser: kill fuser pid <FuserPid>
NOTICE: KillFuser: No fuser process running
NOTICE: KillFuser: pkill -P <FuserPid>
NOTICE: killing parent processes <process list> ...
NOTICE: Killing <process id list> ...
NOTICE: KillPriority=<kill priority>
NOTICE: kill selfpanic process. pid is <pid>
NOTICE LABEL/UUID Command failed to ascertain device name for label/uuid!
NOTICE Leave mount point as garbage.
NOTICE LieOffline Enabled.
NOTICE <LInterface> cannot be configured on <Interface>
NOTICE LogAndExit: mkdir -p <MountPoint> ...
NOTICE LogAndExit: rm -f <LockTarget> ...
NOTICE LogAndExit: rm -f <MountPoint> ...
NOTICE LogAndExit: rm -f <NfsLock> ...
NOTICE LogAndExit: rmdir <NfsDirName> ...
NOTICE Look at the definition of MaxAlias in the file hv_ipalias-c.
NOTICE Lsof complete
NOTICE Lsof: GetRealNameOfSymlink <MountPoint> returned no entry, skipping kill of any processes ...
NOTICE Lsof: killing active processes on <MountPoint> : <Pids>
NOTICE lsof -t <MountPoint> ...
NOTICE: lying interval has expired
NOTICE Mac Address for <Interface> was successfully reset to the system specified address.
NOTICE: Memory allocation failed for size <Size> (<Errno>), Cannot read the file <File>
NOTICE: Memory re-allocation failed for size <Size> (<Errno>), return partial read for the file <File>
NOTICE mkdir <NfsDirName>
NOTICE mkdir <NfsDirName> and make symbolic link <MountPoint>
NOTICE: <Mount> done reporting status
NOTICE MountFS nfs <MountOptions> <What> <NfsDirName> for <MountPoint>
NOTICE Mount NFS mountpoint <MountPoint> was successful.
NOTICE <MountPoint> actively used by the processes: <Pids>
NOTICE <MountPoint> is actively used by the processes: <Pids>
NOTICE <MountPoint> is already gone.
NOTICE: <MountPoint> is already mounted, attempt to read data from it.
NOTICE <MountPoint> is currently mounted, attempt to unmount it.
NOTICE: <MountPoint> is mounted and can be accessed.
NOTICE <MountPoint> is mounted successfully.
NOTICE <MountPoint> is not active, Fuser skipping kill of process id(s): <Pids>
NOTICE <MountPoint> is not active, Lsof skipping kill of process id(s): <Pids>
NOTICE <MountPoint> is symbolic link and is removed now.
NOTICE: mount point <Mount> has a problem, lying <PrevState> for <MaxLieOfflineTime> seconds
NOTICE mount point <Mount> is not in /usr/opt/reliant/dev/nfs
NOTICE: mount point <Mount> is not mounted
NOTICE: mount point <mount-point> has a problem, lying online for xxx seconds
NOTICE: mount point <Mount> status cannot yet be ascertained, waiting a maximum of <MaxLieOfflineTime> seconds
NOTICE: mount point <Mount> status was ascertained successfully again
NOTICE Mount point name <MountPoint> is not symbolic link. Found NFS direct mount.
NOTICE <MountPoint> not mounted.
NOTICE mount -t <Type> <Option> <Dev> <MountPoint>
NOTICE Move <NfsDirName> to <NfsDirName>.RMS.moved.
NOTICE Moving <Element> to <Element>.<Date>
NOTICE: Multiple application specific entries found at line <LineNo> in <Fstab> for mp <MountPoint>. Use the previous definition at line <LineNo>.
NOTICE: Multiple entries defined at line <LineNo> in <Fstab> for mp <MountPoint>. Use the previous definition at line <LineNo>.
NOTICE: Native ZFS mountpoint <MountPoint> is not mounted.
NOTICE <NfsDirectMount> removed.
NOTICE <NfsDirectMount>.RMS.moved found.
NOTICE <NfsDirectMount>.RMS.moved has files. Ignoring moving <NfsDirectMount>.RMS.moved to <NfsDirectMount>.
NOTICE <NfsDirectMount>.RMS.moved MOVED TO <NfsDirectMount>.
NOTICE Nfs mount Directory <NfsDirName> failed. The mount point could be busy.
NOTICE: nfs server is <NfsServer> and nfs server mp is <MountPoint>
NOTICE NFS symbolic link <MountPoint> missing, restoring...
NOTICE: no application has been defined, so no hvassert or lying possible
NOTICE No available Mac Address software was found
NOTICE: No child processes of <process id list> found
NOTICE: No command defined for resource
NOTICE: node <Host> is offline, command <Command> failed
NOTICE: no more sockets (<Errno>)
NOTICE No non-empty paths found that are part of zfs mountpoints.
NOTICE: no ping response received from any host, lying online for <Count> seconds
NOTICE No response from any ping hosts <hosts>. Try once more ...
NOTICE: No Timeout value set -- using 300 seconds
NOTICE Offline processing of <ZpoolName> successful.
NOTICE Online processing of <ZpoolName> successful.
NOTICE: Ok for <application> to start up.
NOTICE: Ok to start up.
NOTICE: opt is <option>. disk is <disk>. Return code <code>
NOTICE: opt is <option>. Return code <code>
NOTICE ping hosts <hosts>
NOTICE ping <NfsServer> ...
NOTICE: ping reply received from <Host>
NOTICE: PreCheckTimeout=<PreCheckTimeout> LowerPrioritySleep=<lower priority sleep value>
NOTICE: priority application <app> is <state> on <host>
NOTICE: PriorityApps=<app>
NOTICE: Processing <application>.
NOTICE: Processing prechecks for application <application>.
NOTICE ProcessRoutingInfo -c <Interface> <RoutingInfo> ...
NOTICE ProcessRoutingInfo -u <Interface> <RoutingInfo> ...
NOTICE: reading from device <CharSpec> failed with error code <RetCode>
NOTICE: reading from device <CharSpec> hung ...
NOTICE: recvfrom error <ErrMsg>
NOTICE: Remove <application> in maintenance mode from the list of lower priority applications, so no waiting for it will occur.
NOTICE Remove empty directory <NfsDirName>.RMS.moved.
NOTICE Remove existing directory <MountPoint>.
NOTICE Remove existing symbolic link <MountPoint>.
NOTICE RemoveInterface deconfiguring <Interface> ...
NOTICE RemoveInterface Interface=<Interface> NoDeconfigure=<NoDeconfigure> Cflag=<Cflag>
NOTICE RemoveInterface not deconfiguring <Interface> ...
NOTICE RemoveInterface resetting mac address for <Interface> ...
NOTICE Remove symbolic link <MountPoint>.
NOTICE: Removing any possible Rawdisk links below reliant/dev/rawdisk...
NOTICE: Removing any possible stale nfs links below reliant/dev/nfs...
NOTICE: Removing stale lock file(s) <lock file list> ...
NOTICE: Reservation is released. counter is <counter>
NOTICE: reservation key was null at the first attempt. Trying again ...
NOTICE: resetting doopenreadcount from <OldValue> to <NewValue>
NOTICE: resetting dopopencount from <OldCount> to <NewCount>
NOTICE: resetting dostatcount from <OldCount> to <NewCount>
NOTICE: resetting dostatcount from <OldValue> to <NewValue>
NOTICE: Resetting Hosts for <app> to <host>
NOTICE: resetting lying time and returning the previous state, since a child process has still not completed.
NOTICE Resetting Mac Address
NOTICE: ReSetting mac Address of base interface <interface> to system defined Mac address failed. <error code>
NOTICE: resource detection for <Resource> has been disabled
NOTICE: resource detection for <Resource> has been reenabled
NOTICE: resource detection has been disabled
NOTICE: resource detection has been enabled
NOTICE: resource detection has been reenabled
NOTICE: resource has become faulted
NOTICE: resource has become offline
NOTICE: resource has become offlinefaulted
NOTICE: resource has become online
NOTICE: resource has become onlinewarning
NOTICE: resource has become standby
NOTICE: <Resource> has become <State>
NOTICE: <Resource> has become <State>. If the resource status became faulted in the middle of the offline/online processing then it was most likely due to the interim state of the resource and should be ignored.
NOTICE: resource has become unknown
NOTICE: RMS already running - nothing to do
NOTICE: RMS is being shut down on <host>.
NOTICE: RMS is not running on <host>, skipping <x> ...
NOTICE: RMS is running - nothing to do
NOTICE: RMS Wizard cleanup successfully terminated.
NOTICE: RMS Wizard rc-script invoked: arguments: <arguments>
NOTICE: rshx - <host> <command> <app>
NOTICE Search and move any existing paths that are part of zfs mountpoints.
NOTICE: Self panic
NOTICE: send standby request for <app> to <host>
NOTICE: server <NfsServer> is not responding
NOTICE: server <NfsServer> is responding once again
NOTICE: Setting takeover mac Address of base interface <interface> from <value> to <value>.
NOTICE: Setting takeover mac Address <value> of base interface <interface> failed. <error code>
NOTICE: Share configuration exists for <mount point> ...
NOTICE: Since the SA_icmp Shutdown Agent is not set, I/O fencing will not perform self-panic of node.
NOTICE: Since the set value of HaltFlag is No, I/O fencing will not perform self-panic of node.
NOTICE: skip waiting for <app> since higher priority applications are already online on all hosts
NOTICE Something strange here. Mount point <MountPoint> is not symbolic nfs but <OldMountType>. Do nothing.
NOTICE start automatic recovery of base address <Interface> ...
NOTICE: starting <command args>.
NOTICE Starting hvcleanupnfs in the background ...
NOTICE: State of <app> on <host>: <state>
NOTICE: State of priority application <app> on <host>: <state>
NOTICE: stat of <Mount> failed
NOTICE: Stopping the processes running on <MountPoint>.
NOTICE: successfully faulted
NOTICE: successfully killed
NOTICE: successfully offlinedone
NOTICE: successfully preonline
NOTICE: successfully started
NOTICE: successfully stopped
NOTICE: Switching <application> due to <application> coming up.
NOTICE: Switching the lower priority application <application> with hvswitch -p, due to <application> coming up.
NOTICE Symbolic link <MountPoint> has been replaced with directory already.
NOTICE Symbolic link <MountPoint> to NFS mountpoint <Entry> was successfully recreated.
NOTICE: sync of the disk <CharSpec> with hdparm -f <CharSpec> succeeded
NOTICE Terminating process : <ZfsFuserPid>
NOTICE: Testing i/o hung on <MountPoint>. Waiting ...
NOTICE: The command "<command>" completed successfully
NOTICE: The command <Command> died unexpectedly with exit code 0x<Value>, status = 0x<Value>.
NOTICE: The command <Command> exited with status <RetCode>. This is fatal error and cannot continue detection.
NOTICE: The command "<command>" has exceeded the allotted time limit <timeout>, returning offline!
NOTICE: The command "<command>" has exceeded the allotted time limit <timeout>, returning <state>! (Previous state is <previous state>)
NOTICE: The command (<Command>) succeeded but output is empty ? This should not happen.
NOTICE: The command <Command> terminated abnormally with status <RetCode>
NOTICE: The command has been timeout : <Command>
NOTICE: The dataset <Dataset> has become a ZFS filesystem in the zpool <ZpoolName>. The old type was <Type>.
NOTICE: The file system <MountPoint> was successfully unmounted.
NOTICE: The generic entry at line <LineNo> in <Fstab> was overloaded by application specfic entry for mp <MountPoint>. Use the previous definition at line <LineNo>.
NOTICE The interface will not be re-configured.
NOTICE: The initial list of ZFS file system from the command execution <Command> is as follows.
NOTICE: The Legacy flag is <Value>
NOTICE: The monitorall is <Value>
NOTICE: The mount point of a ZFS filesystem <ZfsName> has changed from <MountPoint> to <MountPoint>.
NOTICE: The mp name is <MountPoint>
NOTICE: The native mount point <MountPoint> is not correctly shared when it has SHARENFS or NFS property with on. Return faulted.
NOTICE The physical interface <Interface> is already configured, will attempt to configure using a logical interface (alias)
NOTICE: The SHARENFS or NFS property of a ZFS filesystem <ZfsName> has changed from <Sharenfs> to <Sharenfs>.
NOTICE: The status of the zpool, <ZpoolName>, is no longer degraded.
NOTICE: The status of the zpool, <ZpoolName>, is no longer faulted or unavailable.
NOTICE: There is already a PreCheck error, so skipping Processing <applications>
NOTICE: There is an application with higher priority and Faulted on this machine.
NOTICE: There is an application with higher priority in Wait on this machine.
NOTICE: There is an application with higher priority on this machine.
NOTICE There was no interface active for <InterfaceName>.
NOTICE: The Unknown state for <app> will be ignored as it is not defined to run on <host>
NOTICE: The ZFS filesystem <ZfsName> has been added to a zpool <ZpoolName>.
NOTICE: The ZFS filesystem <ZfsName> has been deleted from a zpool <ZpoolName>.
NOTICE: The ZFS filesystem <ZfsName> is no longer ZFS file system type in a zpool <ZpoolName>. The new type is <Type>.
NOTICE: The zpool list <ZpoolName> output shows that the zpool is degraded. Immediate attention is required.
NOTICE: The zpool list <ZpoolName> output shows that the zpool is faulted. Report faulted.
NOTICE: The zpool name is <ZpoolName>
NOTICE: The zpool <ZpoolName> does not have AltRoot defined. The resource becomes faulted in order to prevent the potential data corruption.
NOTICE The zpool <ZpoolName> has Health=<Zhealth> and AltRoot=<Zaltroot>. Try re-import after export.
NOTICE: Timeout set to <timeout>
NOTICE Trying to cleanup before re-importing again.
NOTICE Trying to force import.
NOTICE umount -fr <MountPoint>
NOTICE UmountFS <MountPoint> ...
NOTICE umount -l <MountPoint>
NOTICE umount <MountPoint>
NOTICE umount <MountPoint> done.
NOTICE: umount <MountPoint> failed with error code <RetCode>
NOTICE Umount of <MountPoint> is properly reflected in /proc/mounts, but fails to delete the entry in /etc/mtab file.
NOTICE Umount was successful.
NOTICE Umount was successful. Remove symbolic link <MountPoint>.
NOTICE Using direct mount point.
NOTICE Using indirect mount point with symbolic link method.
NOTICE: using normal system files
NOTICE: using xxx.pcl system files
NOTICE: virtual alias <Address>  reconfigured from <OldInterface> to <NewInterface>
NOTICE: virtual alias <Address> is reconfigured, old is <Interface1> new is <Interface2>
NOTICE: Wait for OfflineDone script. Trying again ...
NOTICE: Waiting for <application> to become Offline.
NOTICE: Waiting for the last application to become Offline ...
NOTICE You might want to increase the maximum number of aliases ...
NOTICE zfs <Zname> is not mounted on <Zmountpoint>. Mounting..
NOTICE: Zone <zone name> is in state <zone state>. About to attach …
NOTICE: Zone <zone name> is in state <zone state>. About to boot …
NOTICE zpool force import failed with exit value <RetCode>.
NOTICE zpool import failed with exit value <RetCode>.
NOTICE zpool import of <ZpoolName> is done.
NOTICE zpool <ZpoolName> already exported or not imported.
NOTICE zpool <ZpoolName> export failed with code <RetCode>. Trying force export ...
NOTICE zpool <ZpoolName> is not imported, running hvexec -Fzpool -c <ZpoolName> ...
NOTICE zpool <ZpoolName> was already imported.
NOTICE ZpoolDeviceOptions=<ZpoolDeviceOptions>
cannot grab mount lock for dostat() check_getbdev(), returning previous state
cannot unlock mount lock for dostat() check_getbdev()
dostat found <info> returning 0
WARNING: /etc/opt/FJSVsdx/bin/sdxiofencing -P -t all failed. class is <class>. Return code <code>
WARNING: /etc/opt/FJSVsdx/bin/sdxiofencing -P -t slice failed. class is <class>. Return code <code>
WARNING: Active disk in <class> not found
WARNING Application <application> cannot be brought Online due to Application <application> being in <current state> state. <application> has LicenseToKillWait=yes set.
WARNING Application <application> cannot be brought Online due to Application <application> being in Faulted or Inconsistent state on <host>. <application> has LicenseToKillWait=yes set.
WARNING Application <application> cannot be brought Online due to Application <application> being in Online State. <application> belongs to the same set as <application> and has a higher priority.
WARNING Application <application> cannot be brought Online due to Application <application> being Online in maintenance mode. <application> belongs to the same set as <application> and has a higher priority.
WARNING Application <application> cannot be brought Standby due to Application <application> being in Online State. <application> belongs to the same set as <application> and has a higher priority.
WARNING Application <application> cannot be brought Standby due to Application <application> being Online in maintenance mode. <application> belongs to the same set as <application> and has a higher priority.
WARNING Application <application> cannot be brought Standby due to Application <application> going Online. <application> belongs to the same set as <application> and has a higher priority.
WARNING Attempt to create another mount point failed.
WARNING: Cannot allocate memory for zfs entry, current size = <Size> for <Value> entry. Cannot detect the zpool properly.
WARNING Cannot assign any interface to <interface>.
WARNING Cannot configure a basic interface <ipaddress> on <interface> without -s option.
WARNING Cannot find <ipaddress> on interface <interface>.
WARNING: Cannot find the legacy mount point <MountPoint> in vfstab[.pcl]. Cannot monitor this resource.
WARNING: Cannot find the zpool info from bdev field (<Bdev>) of mpinfo <MountPoint> in vfstab[.pcl]. Cannot monitor this resource.
WARNING Cannot unconfigure <address> on basic interface <interface> without -s option.
WARNING: check reservation failed. disk is <disk>. opt is <option>
WARNING <Command> called without any ZFS mount point name.
WARNING <Command> called without any zpool name.
WARNING Configuration of interface <interface> failed, undoing changes. Link may be down !
WARNING Could Not Reset Mac Address for <interface>
WARNING Could not set Mac Address <takeovermac> for <interface>
WARNING: disk in <class> not found
WARNING: disk is <disk>. opt is <option>. counter is <counter>
WARNING Ensuring only allowed applications continue to run, with respect to policy based failover management (LicenseToKill/AutoBreak), must be done manually, if required, by executing "hvswitch -p <application>", after <application> leaves maintenance mode!
WARNING Ensuring only allowed applications continue to run, with respect to policy based failover management (LicenseToKill/AutoBreak), must be done manually, if required, by executing "hvswitch -p <application>", after <application> leaves maintenance mode, because AutoBreakMaintMode = no!
WARNING Force export of zpool <ZpoolName> failed with code <RetCode>.
WARNING hv_filesys-c called without any mount point.
WARNING hv_filesys-u called without any mount point.
WARNING hv_ipalias-c called without any interface name.
WARNING hvutil -m forceoff <application> failed with error code <return value> (<error output>).  <application> will not be brought out of maintenance mode!
WARNING hvutil -m off <application> failed with error code <return value> (<output value>).  <application> will not be brought out of maintenance mode!
WARNING No ping at all succeeded.
WARNING: Not enough field in zfs list output at line no <LineNo>.
WARNING: reserve failed. disk is <disk>. opt is <option>
WARNING: Retry limit reached
WARNING: Return code <code>. Trying again ...
WARNING: Return code <code1>, <code2>. Trying again ...
WARNING ScanVfstab called without second parameter.
WARNING The base interface <interface> is not yet configured.  A virtual interface cannot be assigned on top of it!
WARNING The file system <mountpoint> may not be unmounted.
WARNING The file system <mountpoint> was not unmounted.
WARNING The interface <interface> was already present, but it could not be re-configured successfully.
WARNING: The pool name <ZpoolName> for legacy mountpoint <MountPoint>, which is used during the configuration time, is different from the entry in vfstab[.pcl] file, <Bdev>. Use the one in the vfstab[.pcl].
WARNING To avoid a possible deadlock situation, Application <application> cannot be brought Online due to Application <application> already coming <intended state>.  <application> has LicenseToKillWait=yes set.
WARNING Trouble with <interface>, recovering ...
WARNING Application <application> cannot be brought Online due to Application <application> being in Faulted State. <application> belongs to the same set as <application> and has a higher priority.
WARNING Application <application> cannot be brought Online due to Application <application> being still in Wait State. <application> belongs to the same set as <application> and has a higher priority.
WARNING Application <application> cannot be brought Online due to Application <application> going Online. <application> belongs to the same set as <application> and has a higher priority.
WARNING Application <application> cannot be brought Online due to Application <application> is probably going to start. <application> belongs to the same set as <application> and has a higher priority.
WARNING Application <application> cannot be brought Standby due to Application <application> being in Online State. <application> belongs to the same set as <application> and has lower or higher priority.
WARNING Application <application> cannot be brought Standby due to Application <application> going Online. <application> belongs to the same set as <application> and has lower or higher priority.
WARNING <application> could not determine its own AutoBreak value.  Assuming a value of <break value> ...
WARNING <command> called without any application sequence.
WARNING <hv_ipalias-u> called without any ipname.
WARNING <interface> is already configured, but no hosts were successfully pinged.
WARNING <interface> is already configured, but not with the requested addresss IpAddress.
WARNING <interface> was already configured with a different address.
WARNING <path>/<config>.apps does not exist. No processing of application priorities is possible!
WARNING: doopenread of mount-device (pid xxxx),counter=x not done yet reporting status, waiting ...
WARNING: dostat of mount-point (pid xxxx),counter=x not done yet reporting status, waiting ...
WARNING: failed to ascertain share status for <mount> within <maxretry> try/tries
WARNING: failed to open/read <tempfile> within <maxretry> try/tries.
WARNING: failed to popen <mount> within <maxretry> try/tries.
WARNING: failed to popen <mount> within <retries> try/tries
WARNING: failed to stat <mount> within <maxretry> in try/tries.
WARNING: failed to stat <mount> within <retries> try/tries
WARNING: failed to write tempfile <file>, with return code <return>
WARNING: failed to write tempfile <tempfile> within <maxretry> try/tries
WARNING: If the major/minor device numbers are not the same on all cluster hosts, clients will be required to remount the file systems in a failover situation!
WARNING: ip addr add <ipaddress>/<netmask> failed (<errorcode>).
WARNING: ip link set dev <interface> up failed (<errorcode>).
WARNING: <mountpount>, counter=<retrycount> not done yet reporting status, waiting ...
WARNING: <resource > is mounted and the NFS server is not reachable, so returning offline because mounted read only or application is non-switchable or NFS server under RMS control.
WARNING:  Root access is essential for most functionality!
WARNING: stat of <mountpoint> failed
WARNING: status processing timed out, returning previous state <state>
WARNING: hvnfsrestart: The IpAddress <Ipaddress Resource> failed to reach the state Offline in a safe time limit of 180 seconds. This may be a potential problem later.
ERROR: /opt/SMAW/bin/hvawsipalias: External command failed. exit=<exit_code> detail=<aws_command options>
ERROR: /opt/SMAW/bin/hvawsipalias: Invalid configuration file. file=<config>,KeyName=<KeyName>
ERROR: /opt/SMAW/bin/hvawsipalias: Invalid configuration file. file=<config>,Mode=<Mode>
ERROR: /opt/SMAW/bin/hvawsipalias: IpAddress not found in the <file>.
ERROR: /opt/SMAW/bin/hvawsipalias: Multiple KeyNames are defined. KeyName=<KeyName>
ERROR: /opt/SMAW/bin/hvawsipalias: The configuration file <config> does not exist.
ERROR: /opt/SMAW/bin/hvazureipalias: External command failed. exit=<exit_code>, detail=<azure_command_options>
ERROR: /opt/SMAW/bin/hvazureipalias: Invalid configuration file. file=<config>, KeyName=<KeyName>
ERROR: /opt/SMAW/bin/hvazureipalias: Invalid configuration file. file=<config>, Mode=<Mode>
ERROR: /opt/SMAW/bin/hvazureipalias: Multiple KeyNames are defined. KeyName=<KeyName>
ERROR: /opt/SMAW/bin/hvazureipalias: The configuration file <config> does not exist.
ERROR: /opt/SMAW/bin/hvscsireserve preempt-abort failed. prekey is <prekey>
ERROR: /opt/SMAW/bin/hvscsireserve --read-keys failed. Return code <code>
ERROR: /opt/SMAW/bin/hvscsireserve --read-reservation failed. Return code <code>. stdout is :
ERROR: /opt/SMAW/bin/hvscsireserve --read-reservation succeeded, but cannot get key. stdout is :
ERROR: /opt/SMAW/bin/hvscsireserve reserve failed. key is <key>
ERROR: appOnlineOrWait: fork() failed: <errmsg>
ERROR Avoid data corruption and abort the umount action to prevent a failover
ERROR: az login failed. AppID=<AppID>, TenantID=<TenantID>, CertPath=<CertPath>
ERROR Cannot able to determine zpool version.
ERROR Cannot create directory <mountpoint>.
ERROR Cannot create symlink <mountpoint>.
ERROR: cannot get any disk for Persistent Reservation in blockdevice
ERROR cannot read alias file <alias_file>.
ERROR Cannot recreate NFS symbolic link <mountpoint> to <entry>
ERROR cannot remove existing directory <mountpoint> to make symbolic link <mountpoint>.
ERROR cannot remove existing symbolic link <mountpoint>.
ERROR: CF is not running.
ERROR: CF node name not found in the configuration file. file=<config>
ERROR: check_getbdev: fork() failed: <errormsg>
ERROR: checkReservation <disk> <reservation_key> <option> failed
ERROR: "<command>" exited with code <state>
ERROR: could not parse server name for mount point <mount>
ERROR Create <ipaddress> on interface <interface> failed. Out of alias slots.
ERROR: device <device> in /etc/fstab.pcl is not supported
ERROR: Disk device for I/O fencing can not be found
ERROR: doCheckReservation <option> failed. Return code <code>
ERROR: doopenread: fork() failed: <errormsg>
ERROR: dopopen: fork() failed: <errormsg>
ERROR: doRelease <option> failed. Return code <code>
ERROR: doReserve <option> failed. Return code <code>
ERROR: dostat: fork() failed: <errormsg>
ERROR e2fsck -p <device> failed with error code <code>
ERROR: Failed to access disk <disk>.
ERROR: Failed to check AWS resource ID. id=<ID>
ERROR: Failed to check Azure resource ID. id=<ID>
ERROR Failed to mount <Zname> on <Zmountpoint>. Return code <RetCode>.
ERROR Failed to ping <nfsserver>
ERROR Failed to share <Zname>. <Output>. Return code <RetCode>.
ERROR: get SCSIdevice failed. opt is <option>. Return code <code>
ERROR: getSCSIdevice failed. opt is <option>. Return code <code>
ERROR hv_nfs-c/hv_nfs-v called without any mount point.
ERROR <hv_nfs-c/u> cannot determine OldMountType.
ERROR <hv_nfs-c/u> <mountpoint> already has local filesystem mounted on it.
ERROR <hv_nfs-c/u> <mountpoint> is a symbolic name and its target already has local filesystem mounted on it.
ERROR <hv_nfs-c/u> <mountpoint> is a symbolic name and its target already has NFS filesystem mounted on it.
ERROR <hv_nfs-c> <mountpoint> already has NFS filesystem mounted on it.
ERROR: Illegal variable in getReadKeys. disk is <disk>. opt is <option>
ERROR: Illegal variable. disk is <disk>. key is <key>. opt is <option>
ERROR: Illegal variable. disk is <disk>. opt is <option>
ERROR: Incorrect KeyName is set. KeyName=<KeyName>
ERROR: Invalid configuration file. file=<config>,KeyName=<KeyName>
ERROR: Invalid configuration file. file=<config>, KeyName=<KeyName>
ERROR: Invalid configuration file. file=<config>,Mode=<Mode>
ERROR: Invalid configuration file. file=<config>, Mode=<Mode>
ERROR Invalid MAC Address <macaddr> for the interface <interface> Valid format is fld:fld:fld:fld:fld:fld where fld matches pattern "[0-9a-z][0-9a-z]"
ERROR: key is <key>. opt is <option>. disk is <disk>. Return code <code>
ERROR Lost <mountpoint>, about to re-configure
ERROR Lost <ZfsMountPoint>, about to reconfigure ...
ERROR Lost <ZpoolName>, about to reconfigure ...
ERROR <mountpoint> cannot be a mount point.
ERROR <mountpoint> is not of type nfs.
ERROR: Multiple KeyNames are defined. KeyName=<KeyName>
ERROR: Multiple modes are set to a single KeyName. KeyName=<KeyName>,Mode=<Mode1,Mode2>
ERROR One or more ZFS file systems not mounted. Online processing of <ZpoolName> failed.
ERROR One or more zfs mountpoint directories is mounted. <Output>. Cannot import zpool <ZpoolName>.
ERROR One or more zfs mountpoint directories is shared. <Output>. Cannot import zpool <ZpoolName>.
ERROR Only legacy ZFS mount points are supported.
ERROR: opt is <option>. disk is <disk>. Return code <code>
ERROR: popen: fork() failed: <errormsg>
ERROR: probing of device <mount> failed <errorno>
ERROR: Request rejected: userApplication is in Maintenance Mode! - <application>
ERROR Restart <command args>
ERROR: Return code <code>
ERROR: server <nfsserver> is not responding
ERROR Testing i/o on <mountpoint> failed.
ERROR: The certificate file <certificate_file_path> does not exist.
ERROR: The combination of Instance and its IP address is invalid. InstanceIPAddress=<VirtualMachineIPAddress>, InstanceID=<ResourceID>
ERROR: The configuration file <config> does not exist.
ERROR: The ENIID and Instance do not match. ENIID=<ENIID>, Instance-ID=<Instance-ID>
ERROR: The RMS file system /opt/SMAW/SMAWRrms is full.  Proper RMS operation, including this detector can no longer be guaranteed!  Please make space in the file system.
ERROR: The RMS file system /opt/SMAW/SMAWRrms is full.  The detector can no longer function properly and the stat routine will report (perhaps incorrectly) ok, as long as the condition persists!
ERROR There are no interfaces defined for <host> <interface> in <ipconffile>.
ERROR There is no address defined for <interface> in <hostsfile>.
ERROR There is no address defined for <interfacename> in <hostsfile>.
ERROR There is no entry of mount point <mount point> of type <value>.
ERROR There is no generic driver for <interfacename>.
ERROR There is no interface <interface> in <path>, aborting startup.
ERROR There is no interface <value> in /etc/hosts, aborting startup.
ERROR There is no netmask or prefix defined for <host> <interface> in <ipconffile>.
ERROR There is no or no valid netname for <interfacename> in <hostsfile>.
ERROR There is no RMS entry for <mountpoint> in <fstab>.
ERROR There is no RMS entry for <mountpoint> in <dfstab>.
ERROR There is no RMS entry for shared file system <mount point> in <path>.
ERROR There is no RMS entry for <ZfsMountPoint> in <Vfstab> for zpool <ZpoolName>.
ERROR There is no script <path>/hv_<value>-<value>, returning FALSE.
ERROR Timeout configuring <mountpoint>
ERROR Timeout deconfiguring <mountpoint>
ERROR Timeout in <command>
ERROR Timeout in hvexec during precheck phase.
ERROR Timeout in hv_filesys-c
ERROR Timeout in hv_filesys-u
ERROR Timeout in hv_ipalias-c.
ERROR Timeout in hv_ipalias-u.
ERROR Timeout in hv_zfs-c
ERROR Timeout in hv_zfs-u
ERROR Timeout processing Fuser <mountpoint>
ERROR Timeout processing Fuser <ZfsMountPoint>.
ERROR Timeout processing ifadmin -d <ipaddress>.
ERROR Timeout processing umount <mountpoint>
ERROR Timeout processing UmountFS <option>.
ERROR Timeout processing UmountFS <umountoptions> <target>
ERROR Timeout processing UmountFS <ZfsPoolName>.
ERROR Timeout processing ZfsFuser <ZpoolMountPoint>
ERROR Timeout processing zpool export <ZpoolName>
ERROR Timeout while importing <ZpoolName>.
ERROR Timeout while mounting <ZfsMountPoint>
ERROR Timeout while mounting zfs mounts for <ZpoolName>.
ERROR Timeout while processing Fuser <ZfsMountPoint>
ERROR Timeout while processing UmountFS -L <mountpoint>
ERROR Timeout processing <fuser/lsof> -s <nfsserver> <mountpoint>
ERROR Timeout while processing temporary mount
ERROR Timeout while zpool import <ZpoolName>.
ERROR: Undefined CF node name is set. CF node name=<CF>
ERROR: unknown interface <interface>
ERROR Wrong file system type <Type> for <ZfsMountPoint> in <Vfstab>.
ERROR: xfs_repair <device> failed with error code <code>.
ERROR zpool name cannot contain '/'.
FATAL: cannot get any disk for Persistent Reservation in diskclass <classlist>
FATAL ERROR: cannot get any disk for Persistent Reservation in diskclass <diskclass>
FATAL ERROR: exit because child process returned <lyingretries> times with signal <signal>
FATAL ERROR: shutting down RMS!
FATAL ERROR: <Resource> cannot allocate a memory for azpool_t structure.
FATAL ERROR: <Resource> cannot initialize the cmdjob for zfs list.
FATAL ERROR: <Resource> cannot initialize the cmdjob for zpool list.
FATAL ERROR: <resource> could not open the hvgdconfig file <file>.  This error must be corrected before RMS will successfully start!
FATAL ERROR: <Resource> does not have an entry for zpool name in the hvgdconfig file.  This error must be corrected for RMS to start.
FATAL ERROR: <resource> does not have an entry in the hvgdconfig file.  This error must be corrected before RMS will successfully start!
FATAL ERROR: The detection for unknown resource has been requested. shutting down RMS!
FATAL ERROR: The zfs detector (pid = <Pid>) could not allocate the context structure for the resource <Resource> because of malloc failure.
FATAL ERROR: The zfs detector (pid = <Pid>) could not find detector configuration file <File> for the resource <Resource>.
/etc/rc3.d/S99RMS: NOTICE: RMS configuration file not exist or not readable - RMS not starting.
1. WARNING for doopenread
   WARNING: doopenread of mount-device (pid xxxx), counter=x not done yet reporting status, waiting ...
2. NOTICE for doopenread
   NOTICE: doopenread of mount-device (pid xxxx), counter=x not done yet reporting status, waiting ...
3. WARNING for dostat
   WARNING: dostat of mount-point (pid xxxx), counter=x not done yet reporting status, waiting ...
4. NOTICE for dostat
   NOTICE: dostat of mount-point (pid xxxx), counter=x not done yet reporting status, waiting ...
Assertion condition failed.
BEWARE: 'hvshut -f' may break the consistency of the cluster. No further action may be executed by RMS until the cluster consistency is re-established.This re-establishment includes restart of RMS on the shut down host. Do you wish to proceed?(yes = shut down RMS / no = leave RMS running).
BEWARE: the hvreset command will result in a reinitialization of the graph of the specified userApplication. This affects basically the RMS state engine only. The re-initialization does not mean, that activities invoked by RMS so far will be made undone. Manual cleanup of halfway configured resources may be necessary. Do you wish to proceed?(yes = reset application graph / no = abort hvreset).
Can't open modification file.
Cannot start RMS! BM is currently running.
Change dest_object to node.
command1 cannot get list of resources via <command2> from hvcm.
Command aborted.
command: bad state: state.
command: bad timeout: timeout.
command: cannot open file filename.
command: cannot put message in queue
command: could not create a pipe
command failed due to errors in <argument>.
command: failed due to undefined variable: local_host.
<command> failed with exit code exitcode
command: file already exists
command: message queue is not ready yet!
command: Must be super-user to issue this command
command: RMS is not running
Command timed out!
Could not open localfile or could not create temporary file filename
Could not restart RMS. RELIANT_PATH not set.
debugging is on, watch your disk space in /var
Delay delay seconds.....
DISCLAIMER: The hvdump utility will collect the scripts, configuration files, log files and any core dumps. These will be shipped out to RMS support. If there are any proprietary files you do not want included, please exit now. Do you want to proceed? (yes = continue / no = quit)
DISCLAIMER: The hvdump utility will now collect the necessary information. These will be shipped to RMS support.
Dynamic modification is in progress, can't assert states.
ERROR: Assertion terminated: RMS is shutdown
Error becoming a real time process: errorreason
ERROR: Forcibly switch request denied, the following node(s) are in LEFTCLUSTER state: nodes
ERROR: Forcibly switch request denied, unable to kill node <node>
ERROR: Hvshut terminates due to timeout, some objects may still be Online.
ERROR: Local SysNode must be specified
ERROR: Maintenance Mode request cannot be processed. A userApplication is not in an appropriate state on remote hosts. See switchlog of remote hosts for details! - userApplication
ERROR: Maintenance Mode request cannot be processed. The state of the following objects is conflicting with the state of their parents. Leaving maintenance Mode now will cause a Fault to occur.
Error setting up real time parameters: errorreason
Error while starting up bm on the remote host <targethost>: errorreason
Error while starting up local bm: errorreason
Failed to dup a file descriptor.
Failed to exec the hvenv file <hvenvfile>.
Failed to open pipe.
FATAL ERROR Could not restart RMS. Restart count exceeded.
FATAL ERROR: Could not restart RMS. Restart script (script) does not exist.
FATAL ERROR: Could not restart RMS. Failed to recreate RMS restart count file.
FATAL ERROR: RMS has failed to start!
File open failed (path): errorreason.
File system of directory <directory> has no data blocks !!
Forced shut down on the local cluster host!
Fork failed.
hvutil: Could not determine if RMS is running on <targethost>, errno exitcode
hvutil: Could not determine IP address of <targethost>
hvutil: debug option must be a positive number for on, 0 for off.
hvutil: Detector time period must be greater than minimumtime.
hvutil: Failed to allocate socket
hvutil: Missing /etc/services entry for "rmshb"
hvutil: Notify string is longer than mesglen bytes
hvutil: RMS is not running on <targethost>
hvutil: RMS is running on <targethost>
hvutil: The resource <resource> does not have a detector associated with it
hvutil: The resource <resource> is not a valid resource
hvutil: time period of detector must be an integer.
hvutil: Unable to open the notification file <path> due to reason: reason
Invalid delay.
It may take few seconds to do Debug Information collection.
localfile filename does not exist or is not an ordinary file
Name of the modification file is too long.
NOTICE: failed to open/read device mount-device
NOTICE: RMS died but has been successfully restarted, reconnecting
NOTICE: User has been warned of 'hvshut -A' and has elected to proceed.
NOTICE: User has been warned of 'hvshut -f -a' and has elected to proceed.
NOTICE: User has been warned of 'hvshut -L' and has elected to proceed.
RELIANT_LOG_PATH is not defined
RELIANT_PATH is not defined
Remote host <hostname> is not Online.
Remote host does not exist - host.
Remote system is not online.
Reset of RMS has been aborted.
Resource does not exist - resource.
Resource is already online on target node
resource is not in state state.
Resource type must be userApplication or gResource
Request cannot be processed. The following resource(s) are unexpectedly online
RMS environment failure: Failed to set environment using hvenv. Default values for environment variables will be used.
RMS environment failure: Failed to set environment variable <path> for command: <errno>.
RMS environment failure: The following required variable is not defined in RMS environment:
RMS environment failure: <function> failed with errno <errno>.
RMS has failed to start! didn't find a valid entry in the RMS default configuration file "configfilename"
RMS has failed to start! 'hvcm' has been invoked without specifying a configuration with the -c attribute, but with specifying other command line options. This may cause ambiguity and is therefore not possible. Please specify the entire commandline or use 'hvcm' without further options to run the default configuration.
RMS has failed to start! invalid entry in the RMS default configuration file "configfilename"
RMS has failed to start! multiple entries in the RMS default configuration file "configfilename"
RMS has failed to start! RELIANT_HOSTNAME is not defined in the RMS environment
RMS has failed to start! the number of arguments specified at the command line overrides the internal buffer of the RMS start utility
RMS has failed to start! the number of arguments specified at the RMS default configuration file "configfilename" overrides the internal buffer of the RMS start utility
RMS has failed to start! the options "-a" and "-s" are incompatible and may not be specified both
rms is dead
RMS on node node could not be shutdown with hvshut -A.
Root access required to start hvcm
Sending data to resource.
Shutdown of RMS has been aborted.
Starting Reliant Monitor Services now
Starting RMS on remote host host now
startup aborted per user request
systemctl command exited with retcode
The command 'command' could not be executed
The command 'command' failed to reset uid information with errno 'errno' - 'errorreason'.
The command 'command' failed to set the effective uid information with errno 'errno' - 'errorreason'.
The configuration file "<nondefaultconfig>" has been specified as option argument of the -c option, but the Wizard Tools activated configuration is "<defaultconfig>" (see <defaultconfig>). The base monitor will not be started. The desired configuration file must be re-activated by using PCS Wizard activation command.
The file 'filename' could not be opened: errormsg
The length of return message from BM is illegal (actuallength actual expectedlength expected).
The state of RMS service is not active but state
The state of RMS service is not online/degraded but state
The system call systemcall could not be executed: errormsg
The use of the -f (force) flag could cause your data to be corrupted and could cause your node to be killed. Do not continue if the result of this forced command is not clear. The use of force flag of hvswitch overrides the RMS internal security mechanism. In particular RMS does no longer prevent resources, which have been marked as "ClusterExclusive", from coming Online on more than one host in the cluster. It is recommended to double check the state of all affected resources before continuing. Do you wish to proceed ? (default: no) [yes, no]:
The userApplication is in the state Inconsistent on any node
The userApplication must be in the state Online, Offline or Standby on target node
The user has invoked the hvcm command with the -a flag on a host where RMS is already running, sending request to start all remaining hosts.
timed out! Most likely rms on the remote host is dead.
timestamp: NOTICE: User has been warned of 'hvshut -f' and has elected to proceed.
Too many asserted objects, maximum is the max.
Unable to execute command: command
Unable to start RMS on the remote host using cfsh, rsh or ssh
Usage: hvassert [-h SysNode] [-q] -s resource_name resource_state | [-h SysNode] [-q] -w resource_name resource_state seconds | [-h SysNode] [-q] -d resource_name state_detail [seconds]
Usage: hvcm [-V] [-a] [-s targethost] [-c config_file] [-h time] [-l level]
Usage: hvconfig -l | -o config_file
Usage: hvdisp {-a | -c | -h | -i | -l | -n | -S resource_name [-u | -c] | -z resource_name | -T resource_type [-u | -c] | -u | resource_name | ENV | ENVL} [-o out_file]
Usage: hvdump {-g | -f out_file | -t wait_time}
Usage: hvlogclean [-d]
Usage: hvreset [-t timeout] userApplication
Usage: hvshut {-f | -L | -a | -l | -s SysNode | -A}
Usage: hvswitch [-f] userApplication [SysNode] | -p userApplication
Usage: hvswitch [-f] userApplication [SysNode] | [-f] resource SysNode | -p userApplication
Usage: hvutil {-a | -d | -c | -s} userApplication | -f [-q] userApplication | {-t n | -N string } resource | -L {level | display} resource | {-o | -u} SysNode | -l {level | display} | -w | -W | -i {all | userApplication} | -r | -m {on|off|forceoff} userApplication | -M {on|off|forceoff}
Usage: hvutil {-a | -d | -c | -s} userApplication | -f [-q] userApplication | {-f | -c} resource | {-t n | -N string } resource | -L {level | display} resource | {-o | -u} SysNode | -l {level | display} | -w | -W | -i {all | userApplication} | -r | -m {on|off|forceoff} userApplication | -M {on|off|forceoff}
WARNING: The '-L' option of the hvshut command will shut down the RMS software without bringing down any of the applications. In this situation, it would be possible to bring up the same application on another node in the cluster which *may* cause data corruption. Do you wish to proceed ? (yes = shut down RMS / no = leave RMS running).
WARNING: The '-A' option of the hvshut command will shut down the RMS software without bringing down any of the applications on all hosts in the cluster. Do you wish to proceed ? (yes = shut down RMS on all hosts / no = leave RMS running).
WARNING: There is an ongoing kill of cluster host(s) <nodes>. If the host <node> is needed in order to provide failover support for applications on the host(s) <nodes> then this hvshut command should be aborted. Do you wish to proceed with the hvshut of host <node> (yes = shut down RMS / no = leave RMS running).
WARNING: You are about to attempt to resolve a SysNode 'Wait' state by telling RMS that the node in question (<sysnode>) has not actually gone down. This option will only work if, and only if, the cluster node and the RMS instance on that cluster node have been continuously up since before the 'Wait' state began. If the RMS instance on that cluster node has gone down and been restarted this option (hvutil -o) will not work and may cause the RMS instance on that node to hang. If the RMS instance on that node has gone down and been restarted, shut it down again (hvshut -f) and run the 'hvutil -u <sysnode>' command on this cluster host and then restart RMS on the other cluster node.
WARNING: Data corruption may occur, if the Sysnode referred to as option-argument of the '-u' option hasn't been completely deactivated. Do you wish to proceed ? (default: no) [yes, no]:
cfconfig: cannot load: #0423: generic: permission denied
cfconfig: cannot load: #041f: generic: no such file or directory and cfconfig: check that configuration has been specified
cfconfig: cannot load: #0405: generic: no such device/resource and cfconfig: check if configuration entries match node's device list
cfconfig: cannot load: #04xx: generic: reason_text
cfconfig: cannot unload: #0406: generic: resource is busy and cfconfig: check if dependent service-layer module(s) active
cfconfig: cannot unload: #04xx: generic: reason_text
cfconfig: specified nodename: bad length: #0407: generic: invalid parameter
cfconfig: invalid nodename: #0407: generic: invalid parameter
cfconfig: node already configured: #0406: generic: resource is busy
cfconfig: too many devices specified: #0407: generic: invalid parameter
cfconfig: clustername cannot be a device: #0407: generic: invalid parameter
cfconfig: invalid clustername: #0407: generic: invalid parameter
cfconfig: duplicate device names specified: #0407: generic: invalid parameter
cfconfig: device [device [...]]: #0405: generic: no such device/resource
cfconfig: cannot open mconn: #04xx: generic: reason_text
cfconfig: cannot set configuration: #04xx: generic: reason_text
cfconfig: cannot get new configuration: #04xx: generic: reason_text
cfconfig: cannot load: #04xx: generic: reason_text
cfconfig: Invalid argument device: '#0405: generic: no such device/resource'
cfconfig: Too many argument device: '#0405: generic: no such device/resource'
cfconfig: Invalid IP address device: '#0405: generic: no such device/resource'
cfconfig: cannot get configuration: #04xx: generic: reason_text
cfconfig: cannot get joinstate: #0407: generic: invalid parameter
cfconfig: cannot delete configuration: #0406: generic: resource is busy
cfconfig: cannot delete configuration: #04xx: generic: reason_text
cipconfig: could not start CIP - detected a problem with CF. or cipconfig: cannot open mconn: #04xx: generic: reason_text
cipconfig: cannot setup cip: #04xx: generic: reason_text
cipconfig: cannot unload cip: #04xx: generic: reason_text
cftool: CF not yet initialized
cftool: failed to get cluster name: #xxxx: service: reason_text
cftool: cannot open mconn: #04xx: generic: reason_text
cftool: cannot open mconn: #04xx: generic: reason_text
cftool: nodename: No such node
cftool: cannot get node details: #xxxx: service: reason_text
cftool: cannot open mconn: #04xx: generic: reason_text
cftool(down): illegal node number
cftool(down): not executing on active cluster node
cftool(down): cannot declare node down: #0426: generic: invalid node name
cftool(down): cannot declare node down: #0427: generic: invalid node number
cftool(down): cannot declare node down: #0428: generic: node is not in LEFTCLUSTER state
cftool(down): cannot declare node down: #xxxx: service: reason_text
cftool: cannot get nodename: #04xx: generic: reason_text
cftool: cannot get the state of the local node: #04xx: generic: reason_text
cftool: cannot open mconn: #04xx: generic: reason_text
cftool: cannot get icf mac statistics: #04xx: generic: reason_text
cftool: cannot get node id: #xxxx: service: reason_text
cftool: cannot get node details: #xxxx: service: reason_text
cftool: cannot open mconn: #04xx: generic: reason_text
cftool: cannot get node details: #xxxx: service: reason_text
cftool: cannot open mconn: #04xx: generic: reason_text
cftool: clear icf statistics: #04xx: generic: reason_text
cftool: cannot open mconn: #04xx: generic: reason_text
cftool: unexpected error retrieving version: #04xx: generic: reason_text
rcqconfig -a node-1 node-2 . . . . node-n
-g and -a cannot exist together.
Nodename is not valid nodename.
rcqconfig : failed to start
rcqconfig failed to configure qsm since quorum node set is empty.
cfreg_start_transaction:`#2813: cfreg daemon not present`
cfreg_start_transaction:`#2815: registry is busy`
cfreg_start_transaction:`#2810: an active transaction exists`
Too many nodename are defined for quorum. Max node = 64
cfreg_get:`#2809: specified transaction invalid`
cfreg_get:`#2819: data or key buffer too small`
Cannot add node node that is not up.
Cannot proceed. Quorum node set is empty.
cfreg_put:`#2809: specified transaction invalid`
cfreg_put:`#2820: registry entry data too large`
stopping quorum space methods `#0408: unsuccessful`
-g and -x cannot exist together.
Nodename is not valid nodename.
rcqconfig : failed to start
cfreg_start_transaction:`#2813: cfreg daemon not present`
cfreg_start_transaction:`#2815: registry is busy`
cfreg_start_transaction:`#2810: an active transaction exists`
Too many ignore node names are defined for quorum. Max node = 64
cfreg_get:`#2809: specified transaction invalid`
cfreg_get:`#2804: entry with specified key does not exist`
cfreg_get:`#2819: data or key buffer too small`
Can not add node node that is not up.
Can not proceed. Quorum node set is empty.
cfreg_put:`#2809: specified transaction invalid`
cfreg_put:`#2820: registry entry data too large`
cfreg_put:`#2807: data file format is corrupted`
cms_post_event: `#0c01: event information is too large`
-g and -m cannot exist together.
Methodname is not valid method name.
rcqconfig : failed to start
cfreg_start_transaction:`#2813: cfreg daemon not present`
cfreg_start_transaction:`#2815: registry is busy`
cfreg_start_transaction:`#2810: an active transaction exists`
Too many method names are defined for quorum. Max method = 8
cfreg_get:`#2809: specified transaction invalid`
cfreg_get:`#2804: entry with specified key does not exist`
cfreg_get:`#2819: data or key buffer too small`
cfreg_put:`#2809: specified transaction invalid`
cfreg_put:`#2820: registry entry data too large`
cfreg_put:`#2807: data file format is corrupted`
cms_post_event: `#0c01: event information is too large`
-g and -d cannot exist together.
Nodename is not valid nodename.
rcqconfig : failed to start
cfreg_start_transaction:`#2813: cfreg daemon not present`
cfreg_start_transaction:`#2815: registry is busy`
cfreg_start_transaction:`#2810: an active transaction exists`
Too many nodename are defined for quorum. Max node = 64
cfreg_get:`#2809: specified transaction invalid`
cfreg_get:`#2804: entry with specified key does not exist`
cfreg_get:`#2819: data or key buffer too small`
cfreg_put:`#2809: specified transaction invalid`
cfreg_put:`#2820: registry entry data too large`
cfreg_put:`#2807: data file format is corrupted`
cms_post_event: `#0c01: event information is too large`
failed to register user event `#0c0b: user level ENS event memory limit overflow`
WARNING: /etc/panicinfo.conf file already exists.(I)nitialize, (C)opy or (Q)uit (I/C/Q) ?
ERROR: <command> failed
ERROR: <command> failed on <node>
ERROR: <command> timeout
ERROR: failed to distribute index file to <node>
ERROR: failed to distribute /etc/panicinfo.conf file to <node>
ERROR: /etc/sysconfig/netdump is invalid on <node>
ERROR: Cannot find the Netdump client's IP address for <device> on <node>
ERROR: failed to change mode of index file on <node>
ERROR: failed to patch rcsd.cfg on <node>
ERROR: failed to change owner of index and rcsd.cfg file on <node>
ERROR: failed to change group of index and rcsd.cfg file on <node>
ERROR: failed to change mode of /etc/panicinfo.conf file on <node>
ERROR: failed to change owner of /etc/panicinfo.conf file on <node>
ERROR: failed to change group of /etc/panicinfo.conf file on <node>
ERROR: internal error, ...
ERROR: Reading the Shutdown Agent configuration failed.
ERROR: Reading the Shutdown Facility configuration failed.
ERROR: The Blade Shutdown Agent configuration cannot be found.
ERROR: The IPMI Shutdown Agent configuration cannot be found.
ERROR: The IPMI Shutdown Agent configuration is different between nodes.
ERROR: The iRMC IP address of <node> is not set correctly in IPMI Shutdown Agent configuration. (iRMC IP address of <node> : <ip-address>)
ERROR: The RSB Shutdown Agent configuration cannot be found.
ERROR: The Shutdown Facility configuration cannot be found.
ERROR: <filename> generation failed.
date time cfbackup: invalid option specified
date time cfbackup: cmd must be run as root
date time cfbackup: ccbr files & directories must be accessible
date time WARNING: cfbackup: specified generation n too small - using p
date time cfbackup [FORCE] n [(TEST)] log started
date time nodename not an active cluster node
date time no runnable plug-ins! cmd aborted.
date time cfbackup n ended unsuccessfully
date time validation failed in pluginname
date time backup failed in pluginname
date time archive file creation failed
date time archive file compression failed
date time cfbackup n ended
date time cfrestore: invalid option specified
date time cfrestore: cmd must be run as root
date time cfrestore: cmd must be run in single-user mode
date time cfrestore: ccbr files & directories must be accessible
date time cfrestore [FORCE] [TREE] [YES] n [(TEST)] log started
date time ERROR: nodename IS an active cluster node
date time cfrestore n ended unsuccessfully
date time no runnable plug-ins! cmd aborted.
date time unable to find selected archive file: archivefile
date time archive file uncompression failed
date time archive file extraction failed
date time archive file recompression failed
date time warning: backup created with FORCE option
date time plugin present at backup is missing for restore: pluginname
date time negative reply terminates processing
date time plugin validation failed
date time cpio copy for cfrestore failed
date time NOTE: no root subdirectory for cpio copy step
date time plugin restore failed
date time cfrestore n ended
0000: Message not found!!
0001: Illegal option.
0002: No system administrator authority.
0003: File not found. (file:file-name)
0004: The edit of the file failed.
0005: Unknown keyword. (keyword:keyword )
0006: The distribution of the file failed.
0007: The cluster configuration management facility is not running.
0009: The command received a signal.
8000. Cluster application information was registered in the cluster resource management facility.
8001. Cluster application information was removed from the cluster resource management facility.
8002. Cluster application information has been registered in the cluster resource management facility.
8020. Cluster application information will be removed from the cluster resource management facility. Are you sure you want to continue? (yes/no)
8050. Cluster application information is not registered in the cluster resource management facility. Add the information.
8100. illegal option. Usage: clrwzconfig [ -d config_name | -c ]
8101. The cluster configuration management facility is not running. Start it on all the cluster nodes.
8102. RMS is running.(%s) Stop it on all the cluster nodes.
8103. RMS configuration has not been activated. Please execute clrwzconfig command after activating RMS configuration(Configuration-Activate).
8104. RMS configuration(%s) is invalid. The current effective configuration is %s.
8120. Registration of the Cluster application information failed.(function:%d-%s-%s detail:%d)
8130. Deleting the Cluster application information failed.(function:%d-%s-%s detail:%d)
8140. Checking registration of the Cluster application information failed.(function:%d-%s-%s detail:%d)
8150. The clrwzconfig command failed.(function:%d-%s-%s detail:%d)
8151. The clrwzconfig command failed. The command might have been executed concurrently.(function:%d-%s-%s detail:%d) Execute clrwzconfig again.
ERROR: failed to generate the output file "xxx". DIAG: ...
WARNING: The output file "xxx" may not contain some data files. DIAG: ...
wvstat Warn: Can't connect to server <IP address or hostname>,<port number>
<command> does not exist.
<command> does not execute.
No system administrator authority.(uid:<USERID>)
Error in option specification. Usage: clallshutdown -i state [-g time]
The node in LEFTCLUSTER state exists in cluster.(node:<LEFT_NODE>)
Execute the clallshutdown command to stop the node safely with RMS running.
Fail to stop RMS. (errno:<STATUS>)
The command "cftool -n" cannot finish normally.(errno:<STATUS>)
The command "clexec" cannot finish normally.
<CONF_FILE> does not exist.
All outlets are power-on.
WARNING: getting outlet status failed, ignore. node=nodename, ip=ipaddress, outlet=number, rc=value
WARNING: outlet is still power-off, ignore. node=nodename, ip=ipaddress, outlet=number
WARNING: outlet power-on failed, ignore. node=nodename, ip=ipaddress, outlet=number, rc=value
ERROR: configuration file SA_rpdu.cfg is not existing. errno=errno
ERROR: getting outlet status failed. node=nodename, ip=ipaddress, outlet=number, rc=value
ERROR: illegal configuration file. line:number
ERROR: invalid node name. node=nodename
ERROR: ipmitool not found.
ERROR: outlet is still power-off. node=nodename, ip=ipaddress, outlet=number
ERROR: outlet power-on failed. node=nodename, ip=ipaddress, outlet=number, rc=value
cfrecoverdev: cmd must be run as root
cfrecoverdev: cannot create pathreason_text
RCSD returned a successful exit code for this command
A shutdown is in progress. try again later
Could not execlp(RCSD). Errno = errno
Failed to get name product information
Illegal catalog open parameter
mkfifo failed on RCSD response pipe name, errno errno
open failed on rcsdin pipe name, errno errno
RCSD is exiting. Command is not allowed
RCSD returned an error for this command, error is value
read failed, errno errno
select failed, errno errno
The RCSD is not running
unlink failed on RCSD response pipe name, errno errno
Usage: sdtool {-d[on | off] | -s | -S | -m | -M | -r | -b | -C | -l | -e | -k node-name}
write failed on rcsdin pipe name, errno errno
No system administrator authority.
<command> command failed. return_value=<value>.
Could not find <file>.
<file> is invalid.
No system administrator authority.
sfsacfgupdate: ERROR: Could not find ipmitool command.
sfsacfgupdate: ERROR: Failed to change the access permission of <file> on node <node>.
sfsacfgupdate: ERROR: Failed to copy the backup of <file> on node <node>.
sfsacfgupdate: ERROR: Failed to change the group of <file> on node <node>.
sfsacfgupdate: ERROR: Failed to change the owner of <file> on node <node>.
sfsacfgupdate: ERROR: Failed to distribute <file> to node <node>.
sfsacfgupdate: ERROR: <file> generation failed.
sfsacfgupdate: ERROR: ipmi service doesn't start.
sfsacfgupdate: ERROR: Reading the Shutdown Agent configuration failed.
<command> command failed. return_value=<value>.
Could not find <file>.
<file> is invalid.
First SSH connection to the guest domain has not been done yet. (node:nodename ipaddress:ipaddress detail:code)
No system administrator authority.
Saving the configuration information of the logical domain failed.
The domain type is not a control domain.
The guest domain information of the specified node name is not registered. (nodename:nodename)
The Migration function cannot be used in this environment. (nodename:nodename)
The SA <Shutdown Agent> is not registered.
The specified guest domain cannot be connected. (nodename:nodename)
Cannot get CFname node_name on the host OS. ret=ret_code
The check of configuration file succeeded.
Cannot read the kvmguests.conf file correctly.
Command command cannot be executed on the guest OS guest_name. ret=ret_code
Command command executed on the guest OS guest_name failed. ret=ret_code
Connection to the guest OS guest_name is failed. ipaddress: ipaddress,ret=ret_code
Internal error. ret=ret_code, message:error_msg
No system administrator authority.
The guest OS guest_name was disconnected. ret=ret_code
The password decoding failed. ret=ret_code, message:error_msg
The specified option is not correct.
The username or password for logging in to the guest OS guest_name is incorrect.
/etc/opt/FJSVcluster/etc/kvmguests.conf is invalid.
configuration file /etc/opt/FJSVcluster/etc/kvmguests.conf is not existing. errno=<errno>
domain "<domainname>" is not running.
failed to get domain "<domainname>" status.
It is not Host OS.
No system administrator authority.
The guest domain information of the specified node name is not registered. (nodename:<domainname>)
The Migration function cannot be used in this environment. (nodename:nodename)
The specified guest domain cannot be connected. (nodename:nodename)
the specified option is invalid. opt=value
An error has been detected in iRMC.
An error has been detected in the transmission route to iRMC.
An error has been detected in the transmission route to MMB.
An error was detected in MMB.
CF is not running.
clirmcsetup is already running.
Error in option specification. (option:option)
Failed to get iRMC/MMB information. Make sure the ipmi service is running.
Fatal error occurred. (function:function detail:code1-code2-code3-code4)
IP address is not set to iRMC.
IP address is not set to MMB.
No system administrator authority.
Required package isn't installed. (packagename)
The IP address version of the admin LAN of shutdown facility does not match that of iRMC.
The IP address version of the admin LAN of shutdown facility does not match that of MMB.
The iRMC/MMB information is not set.
This server architecture is invalid.
The devirmcd daemon does not exist.
The devirmcd daemon exists.
The devirmcd daemon cannot be started because SF is running.
Failed to start the devirmcd daemon.
Failed to stop the devirmcd daemon.
No system administrator authority.
USAGE: clirmcmonctl [{start | restart | stop}]
Failed to start the devirmcd daemon because the server architecture is invalid.
The devirmcd daemon is already running.
The devirmcd daemon is not running.
The devmmbd daemon does not exist.
The devmmbd daemon exists.
Failed to start the devmmbd daemon because the server architecture is invalid.
USAGE: clmmbmoncntl [{start | restart | stop}]
