GFS kernel module on each node that communicates with the Meta Data Server and provides simultaneous access to a shared file system.
See also Meta Data Server.
In PRIMECLUSTER configurations, an Administrative LAN is a private local area network (LAN) on which machines such as the System Console and Cluster Console reside. Because normal users do not have access to the Administrative LAN, it provides an extra level of security. The use of an Administrative LAN is optional.
See also public LAN.
A resource categorized as userApplication used to group resources into a logical collection.
A shared boundary between a service provider and the application that uses that service.
A predefined group of object definition value choices used by RMS Wizard kit to create object definitions for a specific type of application.
The part of an object definition that specifies how the base monitor acts and reacts for a particular object type during normal operations.
Function that automatically recognizes the physical connection configuration of shared disk units and registers the units to the resource database.
This function is provided by the Enhanced Support Facility (ESF), and it automatically switches the power of the server on and off.
The procedure by which RMS automatically switches control of userApplication over to another host after specified conditions are detected.
See also directed switchover, failover, switchover, and symmetrical switchover.
Availability describes the need of most enterprises to operate applications via the Internet 24 hours a day, 7 days a week. The relationship of the actual to the planned usage time determines the availability of a system.
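The relationship described above reduces to a simple ratio; the following is a minimal illustrative sketch (the function name and figures are hypothetical, not from PRIMECLUSTER):

```python
def availability(actual_usage_hours: float, planned_usage_hours: float) -> float:
    """Availability is the ratio of actual usage time to planned usage time."""
    return actual_usage_hours / planned_usage_hours

# A system planned for 24x7 operation over a 30-day month (720 hours)
# that was actually usable for 719 hours:
print(f"{availability(719, 720):.4%}")
```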
This PRIMECLUSTER module resides on top of the basic OS and provides internal interfaces for the CF (Cluster Foundation) functions that the PRIMECLUSTER services use in the layer above.
See also Cluster Foundation.
The RMS module that maintains the availability of resources. The base monitor is supported by daemons and detectors. Each host being monitored has its own copy of the base monitor.
The improved interprocess communication interface in Oracle 9i that allows logical disk blocks (buffers) to be cached in the local memory of each node. Thus, instead of having to flush a block to disk when an update is required, the block can be copied to another node by passing a message on the interconnect, thereby removing the physical I/O overhead.
The environment configuration file that is used for backup and restore operations, and is placed in the "/opt/SMAW/ccbr" directory. This file is used in the "$CCBRHOME" variable setting. For details, see the manual pages for the "cfbackup(1M)" and "cfrestore(1M)" commands and the comments in the "ccbr.conf" file.
The file that stores the generation number and is placed in the "/opt/SMAW/ccbr" directory. A value of 0 or higher is stored in this file. For details, see the manual pages for the "cfbackup(1M)" and "cfrestore(1M)" commands.
The variable that identifies the directory in which backup data is stored. The initial value is the "/var/spool/pcl4.1/ccbr" directory. This variable can be set only in the "ccbr.conf" file.
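Putting the pieces above together, a "ccbr.conf" file might look like the following sketch; the value shown is the documented initial value, but treat the layout as an illustration rather than a verbatim copy of a shipped file:

```
# /opt/SMAW/ccbr/ccbr.conf -- illustrative sketch only
# CCBRHOME identifies the directory in which backup data is stored.
CCBRHOME=/var/spool/pcl4.1/ccbr
```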
A resource defined in the configuration file that has at least one parent. A child can have multiple parents, and can either have children itself (making it also a parent) or no children (making it a leaf object).
See also resource, object, parent, and leaf object.
A set of computers that work together as a single computing source. Specifically, a cluster performs a distributed form of parallel computing.
See also RMS configuration.
Cluster Configuration Backup and Restore (CF)
CCBR provides a simple method to save the current PRIMECLUSTER configuration information of a cluster node. It also provides a method to restore the configuration information.
The set of PRIMECLUSTER modules that provides basic clustering communication services.
See also base cluster foundation.
The set of private network connections used exclusively for PRIMECLUSTER communications.
This PRIMECLUSTER module handles the forming of a new cluster and the addition of nodes.
Cluster Resource Management facility
Facility that manages hardware units that are shared among multiple nodes.
The operation that omits the preliminary processing needed to establish the operating state immediately on the standby node.
Concatenated virtual disks consist of two or more pieces on one or more disk drives. They correspond to the sum of their parts. Unlike simple virtual disks where the disk is subdivided into small pieces, the individual disks or partitions are combined to form a single large logical disk. (Applies to transitioning users of existing Fujitsu Technology Solutions products only.)
See also mirror virtual disk, simple virtual disk, striped virtual disk, virtual disk.
The linking of multiple physical disks. This setup allows multiple disks to be used as one virtual disk that has a large capacity.
The RMS configuration file that defines the monitored resources and establishes the interdependencies between them. The default name of this file is config.us.
The Console Break agent is used by the Shutdown Facility to eliminate a node by sending a break signal from the RCCU.
Domain in which the Oracle VM Server for SPARC is installed. All platforms that are using Oracle VM Server for SPARC must contain a control domain. By using the ldm command within this domain, other domains can be created and controlled.
A process that monitors the state of a specific object type and reports a change in the resource state to the base monitor.
The RMS procedure by which an administrator switches control of userApplication over to another host.
See also automatic switchover, failover, switchover, and symmetrical switchover.
Collection of SDX objects. The shared type disk class is also a resource unit that can be used by the PRIMECLUSTER system. A disk class is sometimes simply called a "class."
A collection of disks or low-order groups that become the unit for mirroring, striping, or concatenation. Disks and low-order groups that belong to the same disk group are mutually mirrored, striped, or concatenated according to the type attribute (mirror, stripe, or concatenation) of that disk group.
A disk group is sometimes simply called a "group."
A set of one or more system boards that function as an independent system. While the server is shared, an operating system can be installed in each domain to enable each domain to operate as an independent system.
Each domain consists of the logical system boards assigned to it. Each domain is electrically isolated by hardware partitioning. Therefore, if one domain fails, it does not affect the other domains in the server.
A node state that indicates that the node is unavailable (marked as down). A LEFTCLUSTER node must be marked as DOWN before it can rejoin a cluster.
See also UP, LEFTCLUSTER, node state.
The process of detecting an error. For RMS, this includes initiating a log entry, sending a message to a log file, or making an appropriate recovery response.
LAN standard that is standardized by IEEE 802.3. Currently, except for special uses, nearly all LANs are Ethernets. Originally, Ethernet was the name of the LAN standard for the 10 megabit per second type, but the term is now also used generally to include Fast Ethernet and Gigabit Ethernet.
Event Notification Services (CF)
This PRIMECLUSTER module provides an atomic-broadcast facility for events.
One of the LAN duplexing modes presented by GLS.
This mode uses a multiplexed LAN simultaneously to provide enhanced communication scalability between Solaris servers and high-speed switchover if a LAN failure occurs.
A network with the ability to withstand faults (fault tolerant). Fault tolerance is the ability to maintain and continue normal operation even if a fault occurs in part of the computer system. A fault tolerant network is therefore a network that can continue normal communication even if a fault occurs in part of the network system.
Data generation management is enabled in the PRIMECLUSTER backup and restore operations. The current generation number is added as part of the backup and restore data name. Integers of 0 or higher are used as generation numbers, and the generation number is incremented each time backup is successful. The generation number is stored in the "ccbr.gen" file and can be specified as an optional argument in the "cfbackup(1M)" and "cfrestore(1M) " commands.
For details, see the manual pages for the "cfbackup(1M)" and "cfrestore(1M)" commands.
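The generation-number bookkeeping described above can be modeled in a few lines; this is an illustrative sketch only (beyond the "ccbr.gen" file name, nothing here is taken from the actual cfbackup(1M) implementation):

```python
from pathlib import Path

def current_generation(gen_file: Path) -> int:
    """Read the stored generation number (an integer of 0 or higher)."""
    return int(gen_file.read_text().strip()) if gen_file.exists() else 0

def record_successful_backup(gen_file: Path) -> str:
    """After a successful backup, increment the stored generation number
    and return a backup-data name that embeds the generation used."""
    gen = current_generation(gen_file)
    gen_file.write_text(str(gen + 1))
    return f"ccbr-backup.{gen}"

# Example: two successful backups in a scratch directory use
# generations 0 and 1 and leave "2" in the ccbr.gen file.
import tempfile
gen_file = Path(tempfile.mkdtemp()) / "ccbr.gen"
print(record_successful_backup(gen_file))  # ccbr-backup.0
print(record_successful_backup(gen_file))  # ccbr-backup.1
```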
An object type which has generic properties. A generic type is used to customize RMS for monitoring resources that cannot be assigned to one of the supplied object types.
See also object type.
A shared file system that allows simultaneous access from multiple Solaris systems that are connected to shared disk units, while maintaining data consistency, and allows processing performed by a node to be continued by other nodes even if the first node fails.
A GFS shared file system can be mounted and used concurrently from multiple nodes.
This optional product provides volume management that improves the availability and manageability of information stored on the disk unit of the Storage Area Network (SAN).
This optional product provides direct, simultaneous accessing of the file system on the shared storage unit from two or more nodes within a cluster.
This PRIMECLUSTER optional module provides network high availability solutions by multiplying a network route.
A computer interface with windows, icons, toolbars, and pull-down menus that is designed to be simpler to use than the command-line interface.
One of the LAN duplexing modes presented by GLS.
This mode uses a duplexed LAN simultaneously to realize highly reliable communication with a Global Server or SURE system.
Virtualized hardware environment in which an independent operating system is running. It can be started and stopped without any influence on other domains.
This concept applies to the use of redundant resources to avoid single points of failure.
Group that does not belong to another group. A volume can be created in the highest-order group.
The operation which enables preliminary operation so that the operating state can be established immediately on the standby node.
A system that has a Solaris CD image on its disk or CD-ROM drive and distributes the Solaris CD image to other systems over the network.
A numeric address that can be assigned to computers or applications.
See also IP aliasing.
internode communication facility
Communication function between cluster nodes that is used by PRIMECLUSTER CF. Since this facility is designed especially for communication between cluster nodes, its overhead is less than that of TCP/IP, and it provides datagram communication services that also guarantee the message arrival sequence.
A domain in an Oracle VM Server for SPARC environment that is allocated only a PCIe endpoint device, which is managed by the control domain through the Direct I/O function.
The logical domain that holds a physical I/O device in Oracle VM Server for SPARC environments. It holds one or more root complexes. (More I/O root domains than root complexes cannot be created within a partition.)
This enables several IP addresses (aliases) to be allocated to one physical network interface. With IP aliasing, the user can continue communicating with the same IP address, even though the application is now running on another host.
See also Internet Protocol address.
A word that has special meaning in a programming language. For example, in the configuration file, the keyword node identifies the kind of definition that follows.
Time interval from when a data transmission request is issued until the actual response is received.
A bottom object in a system graph. In the configuration file, this object definition is at the beginning of the file. A leaf object does not have children.
A node state that indicates that the node cannot communicate with other nodes in the cluster. That is, the node has left the cluster. The purpose for the intermediate LEFTCLUSTER state is to avoid the network partition problem.
See also UP, DOWN, network partition, node state.
line switching unit (only in Oracle Solaris 10 environment)
This device connects external lines to more than one node and switches the connected nodes by the RCI.
MAC address that the system administrator of a local area network (LAN) system guarantees to be unique within that system.
The file that contains a record of significant system events or messages. The base monitor, wizards, and detectors can have their own log files.
General term for a virtual disk device that the user can access directly. The user can access a logical volume in the same way as accessing a physical disk slice (partition). A logical volume is sometimes simply called a "volume."
Group that belongs to another group. A volume cannot be created in a low-order group.
Address that identifies a station or node at the MAC sublayer of a local area network (LAN).
GFS daemon that centrally manages the control information of a file system (meta-data).
A volume that is created in a mirror group. Data redundancy is created by mirroring.
A disk group of the mirror type. This is a collection of mutually mirrored disks or low-order groups.
A setup that maintains redundancy by writing the same data to multiple slices. Even if an error occurs in some of the slices, this setup allows access to the volume to continue as long as a normal slice remains.
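The redundancy idea above can be sketched as a toy model; the class and names below are hypothetical illustrations (Python dictionaries stand in for slices), not part of any PRIMECLUSTER interface:

```python
class MirroredVolume:
    """Toy model: every write goes to all normal slices, so any one
    surviving slice can still serve reads."""

    def __init__(self, num_slices: int):
        self.slices: list[dict[int, bytes]] = [{} for _ in range(num_slices)]
        self.failed: set[int] = set()

    def write(self, block: int, data: bytes) -> None:
        # The same data is written to every slice that has not failed.
        for i, s in enumerate(self.slices):
            if i not in self.failed:
                s[block] = data

    def read(self, block: int) -> bytes:
        # Access continues as long as a normal slice remains.
        for i, s in enumerate(self.slices):
            if i not in self.failed:
                return s[block]
        raise IOError("no normal slice remains")

vol = MirroredVolume(num_slices=2)
vol.write(0, b"payload")
vol.failed.add(0)            # one slice fails...
print(vol.read(0))           # ...the data is still readable from the other
```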
Mirror virtual disks consist of two or more physical devices, and all output operations are performed simultaneously on all of the devices. (Applies to transitioning users of existing Fujitsu Technology Solutions products only.)
See also concatenated virtual disk, simple virtual disk, striped virtual disk, and virtual disk.
A cluster system that is built from different SPARC Enterprise models. For example, one node is a SPARC Enterprise M3000 machine, and another node is a SPARC Enterprise M4000 machine.
The models are divided into four groups, which are represented by the SPARC T3-1/T3-2/T3-4 machines, SPARC Enterprise T1000/T2000 machines, SPARC Enterprise T5120/T5220/T5140/T5240/T5440 machines, and the SPARC Enterprise M3000/M4000/M5000/M8000/M9000 machines.
Component that monitors the state of a remote cluster node and immediately detects if that node goes down. This component is separate from the SA function.
A configuration that accesses the same disk via multiple controllers. (Applies to transitioning users of existing Fujitsu Technology Solutions products only.)
The part of an operating system that is always active and translates system calls into activities.
This condition exists when two or more nodes in a cluster cannot communicate over the interconnect; however, with applications still running, the nodes can continue to read and write to a shared device, compromising data integrity.
One of the LAN duplexing modes presented by GLS. The duplexed NICs are used exclusively: the LAN between the Solaris server and the switching hub is monitored, and a switchover is performed if an error is detected.
Every node in a cluster maintains a local state for every other node in that cluster. The node state of every node in the cluster must be either UP, DOWN, or LEFTCLUSTER.
See also UP, DOWN, LEFTCLUSTER.
In the configuration file or a system graph, this is a representation of a physical or virtual resource.
See also leaf object, object definition, node state, object type.
An entry in the configuration file that identifies a resource to be monitored by RMS. Attributes included in the definition specify properties of the corresponding resource. The keyword associated with an object definition is object.
See also attribute, object type.
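For illustration, an entry introduced by the keyword object has roughly the following shape; the resource name, host names, attribute names, and values below are hypothetical and should not be read as verbatim RMS configuration-file syntax:

```
object userApplication app1 (
    PriorityList = { fuji2RMS, fuji3RMS };
    AutoSwitchOver = 1;
)
```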
A category of similar resources monitored as a group, such as disk drives. Each object type has specific properties, or attributes, which limit or define what monitoring or action can occur. When a resource is associated with a particular object type, attributes associated with that object type are applied to the resource.
See also generic type.
The capability of adding, removing, replacing, or recovering devices without shutting down or powering off the host.
operating system dependent (CF)
This module provides an interface between the native operating system and the abstract, OS-independent interface that all PRIMECLUSTER modules depend upon.
Oracle Parallel Server allows access to all data in the database to users and applications in a clustered or MPP (massively parallel processing) platform.
Virtualization function using Hypervisor, which is provided as part of the firmware.
OSLC (Oracle Solaris Legacy Containers)
A virtualization function to migrate Oracle Solaris 8/9 environments to hardware on which Solaris 10 is installed.
An object in the configuration file or system graph that has at least one child.
See also child, configuration file, and system graph.
IP address that is assigned directly to the interface (for example, hme0) of a network interface card. See also logical IP address. For information about the logical interface, see the explanation of logical interface in ifconfig(1M).
The default host on which a user application comes online when RMS is started. This is always the hostname of the first child listed in the userApplication object definition.
Service modules that provide services and internal interfaces for clustered applications.
Private network addresses are a reserved range of IP addresses specified by RFC1918. They may be used internally by any organization but, because different organizations can use the same addresses, they should never be made visible to the public internet.
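Python's standard ipaddress module can check whether an address falls in the three blocks RFC 1918 reserves (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16); a short sketch, with a hypothetical helper name:

```python
import ipaddress

# The three address blocks reserved by RFC 1918 for private internets.
RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(address: str) -> bool:
    """Return True if the IPv4 address lies in an RFC 1918 private block."""
    ip = ipaddress.ip_address(address)
    return any(ip in block for block in RFC1918_BLOCKS)

print(is_rfc1918("192.168.10.5"))   # True: private, never visible publicly
print(is_rfc1918("198.51.100.7"))   # False: a public address
```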
A resource accessible only by a single host and not accessible to other RMS hosts.
See also resource, shared resource.
The local area network (LAN) by which normal users access a machine.
See also Administrative LAN.
State in which integrity is maintained among the nodes that configure the cluster system. Specifically, the CF state in all nodes that configure the cluster system is either UP or DOWN (there is no LEFTCLUSTER node).
This is the capability of one object to assume the resource load of any other object in a cluster, and the capability of RAID hardware and/or RAID software to replicate data stored on secondary storage devices.
Reliant Monitor Services (RMS)
The package that maintains high availability of user-specified resources by providing monitoring and switchover capabilities.
remote console connection unit
Device that converts between an RS232C interface and a LAN interface. This device allows another device (such as a personal computer) that is connected to the LAN to use the TTY console functions through the Telnet function.
A message that a detector uses to report the state of a particular resource to the base monitor.
A hardware or software element (private or shared) that provides a function, such as a mirrored disk, mirrored disk pieces, or a database server. A local resource is monitored only by the local host.
See also private resource, shared resource.
Database that manages information on hardware units that are shared among multiple nodes.
The resource database is managed by the cluster resource management facility.
A configuration in which two or more nodes are connected to shared resources. Each node has its own copy of the operating system and RMS software, as well as its own applications.
Each component of the RMS Wizard Kit adds new menu items to the RMS Wizard Tools for a specific application.
See also RMS Wizard Tools, Reliant Monitor Services (RMS).
A software package composed of various configuration and administration tools used to create and manage applications in an RMS configuration.
See also RMS Wizard Kit, Reliant Monitor Services.
In the PRIMECLUSTER Concepts Guide, this term refers to the individual network paths of the redundant cluster interfaces that connect the nodes to each other.
Update method used to apply fixes to an application or perform maintenance within the cluster system. Fixes can be applied to each node sequentially without stopping jobs.
The ability of a computing system to dynamically handle any increase in work load. Scalability is especially important for Internet-based applications, where growth caused by Internet usage presents a scalability challenge.
A shell program executed by the base monitor in response to a state transition in a resource. The script may cause the state of a resource to change.
General term for disks that GDS manages. Depending on its use, an SDX disk may be called a single disk, a keep disk, a spare disk, or an undefined disk. An SDX disk is sometimes simply called a "disk."
General term for resources that GDS manages. The resources include classes, groups, SDX disks, and volumes.
shared disk connection confirmation
Function that checks that all shared disk units are turned on and all cable connections are correct when a node is started.
A resource, such as a disk drive, that is accessible to more than one node.
See also private resource, resource.
A facility that forcibly stops a node in which a failure has occurred. When PRIMECLUSTER decides that the system has reached a state in which the quorum is not maintained, it uses the Shutdown Facility (SF) to return the cluster system to the quorum state.
Simple virtual disks define either an area within a physical disk partition or an entire partition.
See also concatenated virtual disk, striped virtual disk, and virtual disk.
SDX disk that does not belong to a group and can be used to create a single volume.
A volume that is created in a single disk that does not belong to a group. There is no data redundancy.
The state transition procedure receives a state transition instruction from the cluster control and controls activation and deactivation of the resource (start and stop of the application).
The high-speed network that connects multiple, external storage units and storage units with multiple computers. The connections are generally fiber channels.
A disk group of the stripe type. This is a collection of disks or low-order groups that become striping units.
Striped virtual disks consist of two or more pieces. These can be physical partitions or further virtual disks (typically a mirror disk). Sequential I/O operations on the virtual disk can be converted to I/O operations on two or more physical disks. This corresponds to RAID Level 0 (RAID0).
See also concatenated virtual disk, mirror virtual disk, simple virtual disk, virtual disk.
A volume that is created in a striped group. Striping allows the I/O load to be distributed among multiple disks. There is no data redundancy.
Dividing data into fixed-size segments, and cyclically distributing and writing the data segments to multiple slices. This method distributes I/O data to multiple physical disks and issues I/O data at the same time.
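The cyclic distribution described above can be modeled in a few lines; this is a sketch under stated assumptions (a hypothetical fixed "stripe unit" size, with Python lists standing in for the slices):

```python
def stripe(data: bytes, num_slices: int, unit: int) -> list[list[bytes]]:
    """Divide data into fixed-size segments and distribute them
    cyclically (round-robin) across the slices."""
    slices: list[list[bytes]] = [[] for _ in range(num_slices)]
    segments = [data[i:i + unit] for i in range(0, len(data), unit)]
    for n, seg in enumerate(segments):
        slices[n % num_slices].append(seg)
    return slices

# 8 bytes, a 2-byte stripe unit, across 2 slices:
print(stripe(b"ABCDEFGH", num_slices=2, unit=2))
# [[b'AB', b'EF'], [b'CD', b'GH']]
```

Because consecutive segments land on different slices, sequential I/O on the volume turns into I/O spread over multiple physical disks, which is the load-distribution effect the definition describes.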
LAN duplexing mode presented by GLS.
There are five switching modes: fast switching mode, NIC switching mode, GS/SURE linkage mode, multipath mode, and multilink Ethernet mode.
The process by which a user application transfers processes and data inherited from an operating node to a standby node, based on a user request.
The process by which RMS switches control of userApplication over from one monitored host to another.
See also automatic switchover, directed switchover, failover, and symmetrical switchover.
This means that every RMS host is able to take on resources from any other RMS host.
See also automatic switchover, directed switchover, failover, and switchover.
When the power of one node in the cluster system is turned on, this function turns on all other powered-off nodes and the disk array units that are connected to the nodes through RCI cables.
The disk on which the running Solaris is installed. This term refers to the entire disk, including slices that are currently operating as one of the following file systems or swap area:
/, /usr, /var, or swap area
A visual representation (a map) of monitored resources used to develop or interpret the configuration file.
See also configuration file.
A node state that indicates that the node can communicate with other nodes in the cluster.
See also DOWN, LEFTCLUSTER, node state.
A group that limits the environment setup, operation management, and other operations presented by Web-Based Admin View and the Cluster Admin GUI. There are four user groups: wvroot, clroot, cladmin, and clmon. Each user ID is registered in an appropriate user group by the operation system administrator of the management server.
With virtual disks, a pseudo device driver is inserted between the highest level of the Solaris logical input/output (I/O) system and the physical device driver. This pseudo device driver then maps all logical I/O requests to physical disks.
See also concatenated virtual disk, mirror virtual disk, simple virtual disk, striped virtual disk.
In Oracle Solaris Zones environments, an operation that, with the non-global zones already started on both the operating server and the standby server, switches over only the applications running within the non-global zone and takes over services. Since the OS in the standby system's non-global zone is already started, switchover is faster than with cold standby.
This is a common base that enables use of the graphical user interface of PRIMECLUSTER. This interface is implemented in Java.
An interactive software tool that creates a specific type of application using pretested object definitions. An enabler is a type of wizard.
Abbreviation for eXtended System Control Facility. XSCF is a system monitoring facility that consists of dedicated processors that are independent of the main CPU. XSCF performs integrated management of the cooling system (fan unit), the power supply unit, system monitoring, and the power on/off and monitoring of peripherals. These operations can be performed from remote locations; XSCF provides functions to monitor the main unit, notify a system administrator of system failures, and perform console input/output from remote locations via a serial port or Ethernet port.