The procedure for replicating the network and setting up Hadoop is shown below.
On the primary master server, create a slave server definition file (/etc/opt/FJSVbdpp/conf/slaves) defining the slave servers to be connected.
To build a master server replicated configuration, distribute the slave server definition file created on the primary master server to the secondary master server.
Create a slave server definition file.
Example
Defining five slave servers; slave1, slave2, slave3, slave4, and slave5:
# cat /etc/opt/FJSVbdpp/conf/slaves <Enter>
slave1,slave2,slave3,slave4,slave5
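The slave server definition file is a single comma-separated line of host names. The sketch below writes and sanity-checks such a file; it uses a temporary path in place of /etc/opt/FJSVbdpp/conf/slaves, and the host names are placeholders, so adapt both before use:

```shell
# Stand-in path; on a real master server this would be
# /etc/opt/FJSVbdpp/conf/slaves.
SLAVES_FILE=$(mktemp)

# Write the slave server names as one comma-separated line, with no spaces.
printf 'slave1,slave2,slave3,slave4,slave5\n' > "$SLAVES_FILE"

# Sanity check: exactly one line, containing only host-name characters
# and commas.
if [ "$(wc -l < "$SLAVES_FILE")" -eq 1 ] \
   && grep -Eq '^[A-Za-z0-9.-]+(,[A-Za-z0-9.-]+)*$' "$SLAVES_FILE"; then
    echo "slaves file format OK"
else
    echo "slaves file format invalid" >&2
fi
```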
Distribute the slave server definition file to the secondary master server (only when creating a master server replicated configuration).
# scp /etc/opt/FJSVbdpp/conf/slaves root@master2:/etc/opt/FJSVbdpp/conf/slaves <Enter>
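Because the definition file must be identical on every server, it is worth confirming that the distributed copy matches the original. The sketch below simulates the distribution with a local copy (temporary paths stand in for the file on the primary master server and on master2; on a real system the copy step would be the scp command shown above) and then compares the two files:

```shell
# SRC stands in for the file on the primary master server; DEST_DIR/slaves
# stands in for /etc/opt/FJSVbdpp/conf/slaves on the secondary master server.
SRC=$(mktemp)
DEST_DIR=$(mktemp -d)
printf 'slave1,slave2,slave3,slave4,slave5\n' > "$SRC"

# On a real system this step would be:
#   scp "$SRC" root@master2:/etc/opt/FJSVbdpp/conf/slaves
cp "$SRC" "$DEST_DIR/slaves"

# Verify that the two copies are byte-for-byte identical.
if cmp -s "$SRC" "$DEST_DIR/slaves"; then
    echo "slaves file distributed: contents match"
fi
```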
Point
The settings in the slave server definition file must be identical on the primary master server, the secondary master server, the slave servers, and the development servers.
When setting up the slave servers and development servers described later, distribute the slave server definition file from the master server to each of those servers.
See
Refer to "B.2 slaves" for details of the slave server definition file 'slaves'.
Replicate the network and set up Hadoop.
To build a master server replicated configuration, perform this on the primary master server first and then on the secondary master server.
Execute bdpp_setup.
# /opt/FJSVbdpp/setup/bdpp_setup <Enter>
Note
If setup fails, refer to the messages output during the setup operation and/or the setup log file (/var/opt/FJSVbdpp/log/bdpp_setup.log) to diagnose the failure. Then, remove the cause of the failure and perform setup again.
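A typical way to diagnose a failure is to inspect the end of the setup log and search it for error lines. The sketch below uses a temporary file with illustrative content in place of /var/opt/FJSVbdpp/log/bdpp_setup.log; the message text shown is invented for the example, not actual output of bdpp_setup:

```shell
# LOG stands in for /var/opt/FJSVbdpp/log/bdpp_setup.log; the log lines
# here are illustrative placeholders only.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
INFO: setup started
ERROR: example failure message
EOF

# Show the most recent lines of the setup log.
tail -n 20 "$LOG"

# List any error lines, or report that none were recorded.
grep -i 'error' "$LOG" || echo "no errors recorded"
```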
Information
Hadoop setup applies the default Hadoop and DFS configuration settings of this product. Refer to "Appendix C Hadoop Configuration Parameters" for information on how to tune these settings.
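For orientation only: standard Hadoop configuration files express each setting as a property entry like the one below. This fragment uses the generic Hadoop mapred-site.xml convention and an arbitrary example value; the parameters actually tunable in this product, and where to set them, are those listed in "Appendix C Hadoop Configuration Parameters".

```xml
<!-- Illustrative Hadoop property entry (mapred-site.xml style).
     The parameter name and value are examples only; consult
     Appendix C for the settings supported by this product. -->
<property>
  <name>mapred.reduce.tasks</name>
  <value>8</value>
</property>
```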