A master obtained by removing server-specific information (system node names and IP addresses) from the contents of a system disk. The clone master is copied to the system disk of a virtual server when the virtual server is deployed.
Cluster Interconnect Protocol (CIP) LAN
The network that performs heartbeat monitoring between primary and secondary master servers in an HA cluster configuration.
An existing server that accesses data files by using a standard Linux file access interface.
The nodes that make up a cluster in HDFS, the file system used by Hadoop. Big data processed by Hadoop is distributed and replicated in block units and mapped on the DataNodes.
The server used to develop and execute the applications (MapReduce applications) that perform parallel distributed processing.
A technology for efficient distributed and parallel processing of big data. Its main components are the HDFS distributed file system and the MapReduce parallel distributed processing technology.
HDFS (Hadoop Distributed File System)
The distributed file system used by Hadoop. HDFS splits big data into blocks, distributes and replicates the blocks across multiple nodes called DataNodes, and manages this mapping on a node called the NameNode.
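As an illustration of how this mapping is used in practice, the following is a minimal sketch of a client read through the standard org.apache.hadoop.fs API (the file path /user/hadoop/sample.txt is a hypothetical example). The NameNode resolves which DataNodes hold each block of the file, and the data itself is streamed from those DataNodes.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsReadExample {
        public static void main(String[] args) throws Exception {
            // Configuration picks up fs.defaultFS (the NameNode address) from core-site.xml.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Hypothetical path: the NameNode supplies the block locations,
            // and the block contents are read from the DataNodes.
            Path file = new Path("/user/hadoop/sample.txt");
            try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(fs.open(file)))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }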
Information expressing the structure of a virtual image. Image information is required when constructing a virtual image, and separate image information is required for each virtual image that is created.
The parallel distributed processing technology at the core of Hadoop. MapReduce processes distributed data in parallel and then compiles and consolidates the processing results. Its main components are the TaskTracker, which handles the processing on each node of the cluster, and the JobTracker, which manages the overall processing and allocates tasks to the TaskTrackers.
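As a rough illustration of the programming model, the following is a minimal word-count sketch based on the standard org.apache.hadoop.mapreduce API (the class names are hypothetical examples, and the JobTracker/TaskTracker daemons are part of the framework rather than this user code). The map step emits intermediate key/value pairs in parallel on the distributed input blocks, and the reduce step consolidates them.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Map step: runs in parallel on the distributed input blocks and
    // emits a (word, 1) pair for every word found.
    public class WordCountMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce step: consolidates the intermediate results by summing the
    // counts emitted for each word.
    class WordCountReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }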
The server that generates blocks from data files and centrally manages them. It receives requests to execute jobs for analysis processing and has the parallel distributed processing performed on the slave servers.
The network for parallel distributed processing tasks between master servers and slave servers.
A single component whose failure is fatal to the entire system is called a single point of failure. In HDFS, if the NameNode that manages all DataNodes fails, HDFS becomes unusable; the NameNode is therefore a single point of failure for HDFS.
The server that accesses the blocks of data files. Using multiple slave servers allows analysis processing to be completed in a short time through parallel distributed processing.
The services and applications that underpin a communication-based society in which all kinds of people exchange and share a variety of content (such as text, voice, and video) via the Internet. They are called social media to distinguish them from traditional information media (such as newspapers and television).