Follow the steps below to set up the environment to run BDCEP in a cluster system (refer to the PRIMECLUSTER manuals for information on installing and operating PRIMECLUSTER):
1. Install and set up PRIMECLUSTER.
Install PRIMECLUSTER on both the active node and the standby node.
2. Set up IP address takeover.
Set up PRIMECLUSTER so that, when a switchover occurs, the active node and the standby node take over the same IP address and continue operating with it.
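As a quick check after configuring IP address takeover, you can confirm on the currently active node that the takeover IP address is assigned and answers on the network. The lines below are an illustrative sketch only; the address 192.168.10.100 and the interface name eth0 are placeholders, so replace them with the values used in your environment:
# /sbin/ip addr show eth0 | grep 192.168.10.100 <ENTER>
# ping -c 3 192.168.10.100 <ENTER>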
3. Install and set up BDCEP in the active node.
Activate IP address takeover on the active node, and then install and set up BDCEP as usual (for example, create a CEP engine). Refer to "Chapter 4 Installation and Setup" for details.
Note that you must register the active node server information in the master server connection authorization list when performing Hadoop collaboration (refer to E.3, "Adding a Slave Server/Scaling Out" in the User's Guide of the Interstage Big Data Parallel Processing Server V1.0.0 for details).
You must also specify the active node server information in the ix-environment.xml <node> tag when performing XTP collaboration (refer to Section 4.6.2, "Tasks for the XTP Client Node to be Extended" in the Setup and Operation Guide of the Interstage eXtreme Transaction Processing Server V1.0.0 for details).
4. Deploy development assets to the active node.
Deploy the required development assets, such as rule definitions, to the active node (refer to "5.6 Deploying Development Assets" for details).
Once the active node has reached a state in which it can process events, stop its CEP engine and the CEP service.
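As an illustration, stopping the engine and the service from the command line might look like the lines below. The command names cepstopeng and cepstopserv are assumptions inferred from the cepstarteng command shown later in this section, and CepEngine is a placeholder engine name; confirm the actual commands in the command reference before use.
# /opt/FJSVcep/bin/cepstopeng -e CepEngine <ENTER>
# /opt/FJSVcep/bin/cepstopserv <ENTER>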
5. Save the RC procedures of the active node.
Save the start shell script "S99startis" stored in the directories below to any directory for storing backup resources:
/etc/rc2.d
/etc/rc3.d
/etc/rc4.d
/etc/rc5.d
Save the stop shell script "K00stopis" stored in the directories below to a directory for storing backup resources (a sketch of both copy operations follows these lists):
/etc/rc.d/rc0.d
/etc/rc.d/rc1.d
/etc/rc.d/rc6.d
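As an illustration only, the sketch below copies both scripts to a backup directory. The path /var/tmp/bdcep_rc_backup is an example; substitute the directory you actually use for backup resources.
# Example backup directory (placeholder; use your own backup location).
BACKUP_DIR=/var/tmp/bdcep_rc_backup
mkdir -p ${BACKUP_DIR}

# Save the start shell script from each run level directory.
for d in /etc/rc2.d /etc/rc3.d /etc/rc4.d /etc/rc5.d
do
    cp -p ${d}/S99startis ${BACKUP_DIR}/S99startis.$(basename ${d})
done

# Save the stop shell script from each run level directory.
for d in /etc/rc.d/rc0.d /etc/rc.d/rc1.d /etc/rc.d/rc6.d
do
    cp -p ${d}/K00stopis ${BACKUP_DIR}/K00stopis.$(basename ${d})
done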
Disable the automatic start of the Interstage Java EE DAS service:
# /sbin/chkconfig --del FJSVijdas <ENTER>
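To confirm that the service was removed from automatic start, you can list its chkconfig settings; after the --del operation the service should no longer be referenced in any run level:
# /sbin/chkconfig --list FJSVijdas <ENTER>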
6. Switch to the standby node.
Use a PRIMECLUSTER cluster switch operation to switch to the standby node.
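If the switchover is performed from the command line instead of the PRIMECLUSTER GUI, the RMS hvswitch command is typically used. The userApplication name userApp_0 and SysNode name node2RMS below are placeholders; replace them with the names defined in your cluster configuration, and confirm the procedure in the PRIMECLUSTER manuals:
# hvswitch userApp_0 node2RMS <ENTER>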
7. Install and set up BDCEP in the standby node.
With IP address takeover activated in the standby node, install and set up BDCEP in it. Make the CEP engine settings the same as those in the active node (refer to "Chapter 4 Installation and Setup" for details).
Note that you must register the standby node server information into the master server connection authorization list when performing Hadoop collaboration (refer to E.3, "Adding a Slave Server/Scaling Out" in the User's Guide of the Interstage Big Data Parallel Processing Server V1.0.0 for details).
You must also specify the standby node server information in the ix-environment.xml <node> tag when performing XTP collaboration (refer to Section 4.6.2, "Tasks for the XTP Client Node to be Extended" in the Setup and Operation Guide of the Interstage eXtreme Transaction Processing Server V1.0.0 for details).
8. Deploy development assets to the standby node.
Deploy the same development assets as those in the active node to the standby node (refer to "5.6 Deploying Development Assets" for details).
There is no need to repeat work already performed in step 4, "Deploy development assets to the active node", such as providing data and deploying collaboration applications. Note, however, that if master data is placed on the local disk of the CEP Server, it must be stored on the standby node in the same way as on the active node.
Once the standby node has reached a state in which it can process events, stop its CEP engine and the CEP service.
9. Save the RC procedures of the standby node.
Perform the same operations on the standby node as in step 5, "Save the RC procedures of the active node".
10. Change and register the Cmdline resource.
Edit the Cmdline resource sample file below and register it in the cluster system so that the CEP engine starts automatically on a cluster switchover:
/opt/FJSVcep/HA/sample/SERVICE_BDCEP
Edit the sample file as shown below, and then register it in the cluster system using a PRIMECLUSTER operation.
In the line below, change the Engine-Name part to the name of the CEP engine created on the active node and standby node:
STARTCMDE='/opt/FJSVcep/bin/cepstarteng -e Engine-Name'
To use multiple CEP engines, add one cepstarteng line for each CEP engine to be started.
Example
Description example when multiple CEP engines are to be used
(...)
STARTCMDE='/opt/FJSVcep/bin/cepstarteng -e CepEngine1'
STARTCMDE2='/opt/FJSVcep/bin/cepstarteng -e CepEngine2'
(...)
start() {
    ${STARTCMD} > /dev/null 2>&1
    ${STARTCMDE} > /dev/null 2>&1
    ${STARTCMDE2} > /dev/null 2>&1
}
(...)
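Only the start path of the sample is shown above. The sample file also contains a corresponding stop path; if you add engines there as well, the result might look like the sketch below. The variable names STOPCMDE and STOPCMDE2 and the cepstopeng command are assumptions, so match them to what the sample file actually contains.
(...)
STOPCMDE='/opt/FJSVcep/bin/cepstopeng -e CepEngine1'
STOPCMDE2='/opt/FJSVcep/bin/cepstopeng -e CepEngine2'
(...)
stop() {
    ${STOPCMDE} > /dev/null 2>&1
    ${STOPCMDE2} > /dev/null 2>&1
    ${STOPCMD} > /dev/null 2>&1
}
(...)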