Interstage Big Data Parallel Processing Server V1.0.0 User's Guide

3.1.6 Checking the Connection to the First Slave Server

Before building the second and subsequent slave servers, check that the first slave server, which has already been installed, can connect correctly to the master server.

  1. If not logged in to the master server (primary), log in with root permissions.

  2. Back up the slave server definition file (/etc/opt/FJSVbdpp/conf/slaves) so that it can be edited to check the connection of the first slave server.

    # cp /etc/opt/FJSVbdpp/conf/slaves /etc/opt/FJSVbdpp/conf/slaves.bak
  3. Edit the slave server definition file, deleting all slave servers except the one whose connection is to be checked.

    Before editing:

    # cat /etc/opt/FJSVbdpp/conf/slaves
    slaves1,slaves2,slaves3,slaves4,slaves5

    After editing:

    # cat /etc/opt/FJSVbdpp/conf/slaves
    slaves1
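
    The file can be edited with any text editor; vi is shown below only as an example.

    # vi /etc/opt/FJSVbdpp/conf/slaves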
  4. Execute the bdpp_changeslaves command to apply the changes made to the slave server definition file.
    Refer to "A.1.3 bdpp_changeslaves" for information on the bdpp_changeslaves command.

    # /opt/FJSVbdpp/bin/bdpp_changeslaves <Enter>
  5. Start Hadoop on the Interstage Big Data Parallel Processing Server.
    On the master server, execute the bdpp_start command to start Hadoop on the Interstage Big Data Parallel Processing Server.
    Refer to "A.1.11 bdpp_start" for details of the bdpp_start command.

    # /opt/FJSVbdpp/bin/bdpp_start
  6. Use the Interstage Big Data Parallel Processing Server status display to check whether TaskTracker has started on the target slave server.
    After confirming that the slave server system is running, execute the bdpp_stat command on the master server to display the Hadoop status of the Interstage Big Data Parallel Processing Server.
    Refer to "A.1.12 bdpp_stat" for details of the bdpp_stat command.

    # /opt/FJSVbdpp/bin/bdpp_stat -all
    cluster mapred  30595   jobtracker
    slave1  mapred  11381   tasktracker          <-- Check that TaskTracker is started at slave1.
    bdpp: INFO : 003: bdpp Hadoop JobTracker is alive.
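
    If required, TaskTracker can also be checked directly on the slave server by listing the Java processes with the jps command and confirming that a TaskTracker process appears in the list. This assumes that a JDK providing jps is installed on the slave server.

    # jps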
  7. Run a Hadoop job to check operation.
    Use the samples provided with Hadoop (teragen, terasort, etc.) to check that Hadoop jobs run correctly, for example as shown below.
    Refer to "5.3 Executing and Stopping Jobs" for information on executing Hadoop jobs.

  8. Stop Hadoop on the Interstage Big Data Parallel Processing Server.
    Use the bdpp_stop command to stop Hadoop on the Interstage Big Data Parallel Processing Server.
    Refer to "A.1.13 bdpp_stop" for details of the bdpp_stop command.

    # /opt/FJSVbdpp/bin/bdpp_stop
  9. After checking the connection between the master server and the first slave server, restore the slave server definition file from the backup.

    # rm -fr /etc/opt/FJSVbdpp/conf/slaves
    # cp /etc/opt/FJSVbdpp/conf/slaves.bak /etc/opt/FJSVbdpp/conf/slaves
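
    To confirm that the original content has been restored, display the file.

    # cat /etc/opt/FJSVbdpp/conf/slaves
    slaves1,slaves2,slaves3,slaves4,slaves5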
  10. Execute the bdpp_changeslaves command again to apply the restored slave server definition file.

    # /opt/FJSVbdpp/bin/bdpp_changeslaves <Enter>

    Note

    When starting or stopping Hadoop on the Interstage Big Data Parallel Processing Server, a password prompt is displayed because Hadoop is started and stopped via SSH.

    To avoid repeated password input, distribute the SSH public key of the master server's root user to the slave servers and the development server so that SSH can be performed without a password. Refer to the ssh-keygen command help for details.
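
    For example, the key can be generated and distributed as follows. Generate the key with an empty passphrase, and repeat the distribution for each slave server and the development server (the host name slave1 is only an example).

    # ssh-keygen -t rsa
    # ssh-copy-id root@slave1

    If the ssh-copy-id command is not available, append the content of /root/.ssh/id_rsa.pub on the master server to /root/.ssh/authorized_keys on each destination server instead.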