This section provides guidelines for planning global devices and for planning cluster file systems.
Sun Cluster software does not require any specific disk layout or file system size. Consider the following points when you plan your layout for global devices. Mirroring — You must mirror all global devices for the global device to be considered highly available. You do not need to use software mirroring if the storage device provides hardware RAID as well as redundant paths to disks. Disks — When you mirror, lay out file systems so that the file systems are mirrored across disk arrays.
Availability — You must physically connect a global device to more than one node in the cluster for the global device to be considered highly available. A global device with multiple physical connections can tolerate a single-node failure.
A global device with only one physical connection is supported, but the global device becomes inaccessible from other nodes if the node with the connection is down. Non-global zones - Global devices are not directly accessible from a non-global zone. Only cluster-file-system data is accessible from a non-global zone.
Add this planning information to the Device Group Configurations Worksheet. Failover — You can configure multihost disks and properly configured volume-manager devices as failover devices. Proper configuration of a volume-manager device includes multihost disks and correct setup of the volume manager itself. This configuration ensures that multiple nodes can host the exported device. Mirroring — You must mirror the disks to protect the data from disk failure. See Mirroring Guidelines for additional guidelines.
Storage-based replication - Disks in a device group must be either all replicated or none replicated. A device group cannot use a mix of replicated and nonreplicated disks.
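As an illustration only (the disk set name, node names, and DID device are hypothetical), on a cluster that uses Solaris Volume Manager, a multihost disk set, which Sun Cluster software registers as a device group, is typically created and populated with commands of the following general form:
phys-schost-1# metaset -s nfsset -a -h phys-schost-1 phys-schost-2
phys-schost-1# metaset -s nfsset -a /dev/did/rdsk/d3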
When you plan cluster file systems, note that you can alternatively configure highly available local file systems. Quotas - Quotas are not supported on cluster file systems. However, quotas are supported on highly available local file systems. Non-global zones - If a cluster file system is to be accessed from a non-global zone, it must first be mounted in the global zone.
The cluster file system is then mounted in the non-global zone by using a loopback mount. Therefore, the loopback file system (LOFS) must be enabled in a cluster that contains non-global zones. You must manually disable LOFS on each cluster node if the cluster meets both of the following conditions: Sun Cluster HA for NFS is configured on a highly available local file system, and the automountd daemon is running. If the cluster meets both of these conditions, you must disable LOFS to avoid switchover problems or other failures. If the cluster meets only one of these conditions, you can safely enable LOFS.
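If you do need to disable LOFS, one documented way on the Solaris OS is to add the following entry to the /etc/system file on each cluster node and then reboot the node (shown here as a sketch; verify the entry against your Solaris version before applying it):
exclude:lofs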
Process accounting log files - Do not locate process accounting log files on a cluster file system or on a highly available local file system.
A switchover would be blocked by writes to the log file, which would cause the node to hang. Use only a local file system to contain process accounting log files. Communication endpoints - The cluster file system does not support any of the file-system features of Solaris software by which one would put a communication endpoint in the file-system namespace.
Although you can create a UNIX domain socket whose name is a path name into the cluster file system, the socket would not survive a node failover. Any FIFOs or named pipes that you create on a cluster file system would not be globally accessible. Therefore, do not attempt to use the fattach command from any node other than the local node. Device special files - Neither block special files nor character special files are supported in a cluster file system.
Do not use the mknod command for this purpose. Installing applications - If you want the binaries of a highly available application to reside on a cluster file system, wait to install the application until after the cluster file system is configured. Also, if the application is installed by using the Sun Java System installer program and the application depends on any shared components, install those shared components on all nodes in the cluster that are not installed with the application.
This section describes requirements and restrictions for the supported types of cluster file systems, such as UFS and VxFS. You can alternatively configure these and other types of file systems as highly available local file systems. Follow these guidelines to determine what mount options to use when you create your cluster file systems. onerror=panic - Required for UFS cluster file systems; you do not have to specify it explicitly in the /etc/vfstab file, because this mount option is already the default value if no other onerror mount option is specified.
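As an illustration (the metadevice paths and mount point here are hypothetical, and the global and logging mount options are the ones commonly documented for UFS cluster file systems), an /etc/vfstab entry for a UFS cluster file system on each node might look like the following:
/dev/md/oradg/dsk/d1  /dev/md/oradg/rdsk/d1  /global/oracle  ufs  2  yes  global,logging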
The onerror=umount and onerror=lock mount options are not supported on cluster file systems for the following reasons: either option might cause the cluster file system to lock or become inaccessible, a condition that might occur if the cluster file system experiences file corruption. Either option might also leave the cluster file system unmountable, and this condition might thereby cause applications that use the cluster file system to hang or prevent the applications from being killed.
If you specify syncdir, you are guaranteed POSIX-compliant file system behavior for the write system call. If a write succeeds, then this mount option ensures that sufficient space is on the disk. If you do not specify syncdir, the same behavior occurs that is seen with UFS file systems. When you do not specify syncdir, performance of writes that allocate disk blocks, such as when appending data to a file, can significantly improve. However, in some cases, without syncdir you would not discover an out-of-space condition (ENOSPC) until you close a file. With syncdir, as with POSIX behavior, the out-of-space condition would be discovered before the close. For VxFS cluster file systems, mount and unmount the file system from the primary node only. The primary node is the node that masters the disk on which the VxFS file system resides.
This method ensures that the mount or unmount operation succeeds. A VxFS file-system mount or unmount operation that is performed from a secondary node might fail. Certain VxFS features and options are not supported in a cluster file system; they are, however, supported in a local file system. All other VxFS features and options that are supported in a cluster file system are supported by Sun Cluster 3.2 software.
Nesting mount points — Normally, you should not nest the mount points for cluster file systems. Ignoring this rule can cause availability and node boot-order problems. These problems would occur if the parent mount point is not present when the system attempts to mount a child of that file system.
The only exception to this rule is if the devices for the two file systems have the same physical node connectivity. An example is different slices on the same disk.
This section provides guidelines for planning volume management of your cluster configuration. Sun Cluster software uses volume-manager software to group disks into device groups, which can then be administered as one unit.
You must install Solaris Volume Manager software on all nodes of the cluster, regardless of whether you use VxVM on some nodes to manage disks.
You are only required to install and license VxVM on those nodes that are attached to storage devices which VxVM manages. If you install both volume managers on the same node, you must use Solaris Volume Manager software to manage disks that are local to each node.
Local disks include the root disk. Use VxVM to manage all shared disks. See your volume-manager documentation and Configuring Solaris Volume Manager Software or Installing and Configuring VxVM Software for instructions about how to install and configure the volume-manager software. Consider the following general guidelines when you configure your disks with volume-manager software: Mirrored multihost disks — You must mirror all multihost disks across disk expansion units.
See Guidelines for Mirroring Multihost Disks for guidelines on mirroring multihost disks. You do not need to use software mirroring if the storage device provides hardware RAID as well as redundant paths to devices. Mirrored root — Mirroring the root disk ensures high availability, but such mirroring is not required. See Mirroring Guidelines for guidelines about deciding whether to mirror the root disk. Node lists — To ensure high availability of a device group, make its node lists of potential masters and its failback policy identical to any associated resource group.
Or, if a scalable resource group uses more nodes or zones than its associated device group, make the scalable resource group's node list a superset of the device group's node list.
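For example, the node list and failback policy of an existing device group, and the node list of its associated resource group, can be compared by inspecting the output of the following commands (shown as a sketch; the output is configuration specific):
phys-schost# cldevicegroup show
phys-schost# clresourcegroup show -v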
Multihost disks — You must connect, or port, all devices that are used to construct a device group to all of the nodes that are configured in the node list for that device group.
Solaris Volume Manager software can automatically check for this connection at the time that devices are added to a disk set. However, configured VxVM disk groups do not have an association to any particular set of nodes.
Hot spare disks — You can use hot spare disks to increase availability, but hot spare disks are not required. See your volume-manager documentation for disk layout recommendations and any additional restrictions. Local volume names – The name of each local volume must be unique throughout the cluster. Also, the name cannot be the same as any device-ID name. Dual-string mediators — Each disk set configured with exactly two disk strings and mastered by exactly two nodes must have Solaris Volume Manager mediators configured for the disk set.
A disk string consists of a disk enclosure, its physical disks, cables from the enclosure to the node or nodes, and the interface adapter cards. Observe the following rules to configure dual-string mediators: You must use the same two nodes for all disk sets that require mediators. Those two nodes must master those disk sets. Mediators cannot be configured for disk sets that do not meet the two-string and two-host requirements.
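As a sketch (the disk set and node names are hypothetical), mediator hosts are typically added to a dual-string disk set, and their status checked, with commands of the following form:
phys-schost-1# metaset -s mirrorset -a -m phys-schost-1 phys-schost-2
phys-schost-1# medstat -s mirrorset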
See the mediator(7D) man page for details. md.conf settings – On the Solaris 9 OS, the volumes that the disk sets can use are created in advance, based on the md_nsets and nmd settings in the /kernel/drv/md.conf file; all cluster nodes must have identical /kernel/drv/md.conf files, and failure to follow this guideline can result in serious Solaris Volume Manager errors and possible loss of data. With the Solaris 10 release, Solaris Volume Manager has been enhanced to configure volumes dynamically, and new volumes are dynamically created as needed. Set md_nsets to the expected number of disk sets in the cluster plus one; Solaris Volume Manager software uses the additional disk set to manage the private disks on the local host. The maximum number of disk sets that are allowed per cluster is 32. This number allows for 31 disk sets for general use plus one disk set for private disk management.
Set nmd to the highest predicted value of any volume name that will be used in the cluster. For example, if the highest value of the volume names that are used in the first 15 disk sets of a cluster is 10, but the highest value of the volume in the 16th disk set is 1000, set the value of nmd to at least 1000. Also, the value of nmd must be large enough to ensure that enough numbers exist for each device-ID name. The number must also be large enough to ensure that each local volume name can be unique throughout the cluster. The highest allowed value of a volume name per disk set is 8192. The default value of nmd is 128. Set these fields at installation time to allow for all predicted future expansion of the cluster.
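For illustration only (the edited values are hypothetical and must be sized for your own configuration), the relevant line in /kernel/drv/md.conf has the following general form, and the same edited line would be placed on every node:
# default entry shipped with Solaris Volume Manager:
# name="md" parent="pseudo" nmd=128 md_nsets=4;
name="md" parent="pseudo" nmd=1024 md_nsets=20;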
Increasing the value of these fields after the cluster is in production is time consuming, because the value change requires a reconfiguration reboot for each node. Accessibility to nodes - You must configure all volume-manager disk groups as either Sun Cluster device groups or as local-only disk groups. If you do not configure the disk group in one of these ways, the devices in the disk group will not be accessible to any node in the cluster.
A local-only disk group functions outside the control of Sun Cluster software and can be accessed from only one node at a time. Enclosure-Based Naming — If you use Enclosure-Based Naming of devices, ensure that you use consistent device names on all cluster nodes that share the same storage. VxVM does not coordinate these names, so the administrator must ensure that VxVM assigns the same names to the same devices from different nodes.
Failure to assign consistent names does not interfere with correct cluster behavior. However, inconsistent names greatly complicate cluster administration and greatly increase the possibility of configuration errors, potentially leading to loss of data. Root disk group — The creation of a root disk group is optional.
Simple root disk groups — Simple root disk groups, which are created on a single slice of the root disk, are not supported as disk types with VxVM on Sun Cluster software. This is a general VxVM software restriction. Encapsulation — Disks to be encapsulated must have two disk-slice table entries free. Number of volumes — Estimate the maximum number of volumes any given device group can use at the time the device group is created.
If the number of volumes is 1000 or greater, you must carefully plan the way in which minor numbers are assigned to device group volumes. No two device groups can have overlapping minor number assignments. The use of DMP is supported only in certain configurations. Sun Cluster software supports multiple choices of file-system logging.
This section provides guidelines for planning the mirroring of your cluster configuration. Mirroring all multihost disks in a Sun Cluster configuration enables the configuration to tolerate single-device failures. Sun Cluster software requires that you mirror all multihost disks across expansion units.
Separate disk expansion units — Each submirror of a given mirror or plex should reside in a different multihost expansion unit. Three-way mirroring – Solaris Volume Manager software and VxVM support three-way mirroring; however, Sun Cluster software requires only two-way mirroring. Differing device sizes — If you mirror to a device of a different size, your mirror capacity is limited to the size of the smallest submirror or plex. Mirroring the root disk – For maximum availability, you can mirror root (/), /usr, /var, /opt, and swap on the local disks. Under VxVM, you encapsulate the root disk and mirror the generated subdisks. However, Sun Cluster software does not require that you mirror the root disk.
Before you decide whether to mirror the root disk, consider the risks, complexity, cost, and service time for the various alternatives that concern the root disk. No single mirroring strategy works for all configurations. You might want to consider your local Sun service representative's preferred solution when you decide whether to mirror root.
Boot disk — You can set up the mirror to be a bootable root disk. You can then boot from the mirror if the primary boot disk fails. Complexity — Mirroring the root disk adds complexity to system administration. Mirroring the root disk also complicates booting in single-user mode.
If the cluster's private IP address range is changed, the private IP subnets and the corresponding private IP addresses that are allocated for any zone cluster that is already configured in the cluster will also be updated.
If you specify a private-network address other than the default, the address must meet the following requirements: Address and netmask sizes — The private network address cannot be smaller than the netmask. For example, you can use a private network address of 172.16.10.0 with a netmask of 255.255.255.0, but you cannot use a private network address of 172.16.10.0 with a netmask of 255.255.0.0. Acceptable addresses — The address must be included in the block of addresses that RFC 1918 reserves for use in private networks. Use in multiple clusters — You can use the same private-network address in more than one cluster, provided that the clusters are on different private networks. Private IP network addresses are not accessible from outside the physical cluster. For Sun Logical Domains (LDoms) guest domains that are created on the same physical machine and that are connected to the same virtual switch, the private network is shared by such guest domains and is visible to all these domains.
Proceed with caution before you specify a private-network IP address range to the scinstall utility for use by a cluster of guest domains. Ensure that the address range is not already in use by another guest domain that exists on the same physical machine and shares its virtual switch.
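As a sketch (the address and netmask values shown are only examples, and changing these properties normally requires the nodes to be in noncluster mode), the current private-network settings of an established cluster can be displayed, and changed if necessary, with the cluster command:
phys-schost# cluster show-netprops
phys-schost# cluster set-netprops -p private_netaddr=172.16.0.0 -p private_netmask=255.255.248.0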
IPv6 – Sun Cluster software does not support IPv6 addresses for the private interconnect. The system does configure IPv6 addresses on the private-network adapters to support scalable services that use IPv6 addresses.
But internode communication on the private network does not use these IPv6 addresses. The private hostname is the name that is used for internode communication over the private-network interface. Private hostnames are automatically created during Sun Cluster configuration of a global cluster or a zone cluster. These private hostnames follow the naming convention clusternodeN-priv, where N is the numeral of the internal node ID. For example, the voting node that is assigned node ID 3 has the private hostname clusternode3-priv.
During Sun Cluster configuration, the node ID number is automatically assigned to each voting node when the node becomes a cluster member. A voting node of the global cluster and a node of a zone cluster can both have the same private hostname, but each hostname resolves to a different private-network IP address.
After a global cluster is configured, you can rename its private hostnames by using the clsetup(1CL) utility. Currently, you cannot rename the private hostname of a zone-cluster node. For the Solaris 10 OS, the creation of a private hostname for a non-global zone is optional.
There is no required naming convention for the private hostname of a non-global zone. The cluster interconnects provide the hardware pathways for private-network communication between cluster nodes. Each interconnect consists of a cable that is connected either between two transport adapters or between a transport adapter and a transport switch.
You do not need to configure a cluster interconnect for a single-host cluster. However, if you anticipate eventually adding more voting nodes to a single-host cluster configuration, you might want to configure the cluster interconnect for future use. During Sun Cluster configuration, you specify configuration information for one or two cluster interconnects. If the number of available adapter ports is limited, you can use tagged VLANs to share the same adapter with both the private and public network.
You can set up from one to six cluster interconnects in a cluster. While a single cluster interconnect reduces the number of adapter ports that are used for the private interconnect, it provides no redundancy and less availability.
If a single interconnect fails, the cluster is at a higher risk of having to perform automatic recovery. Whenever possible, install two or more cluster interconnects to provide redundancy and scalability, and therefore higher availability, by avoiding a single point of failure. You can configure additional cluster interconnects, up to six interconnects total, after the cluster is established by using the clsetup(1CL) utility.
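For example, once the cluster is running, the state of the configured interconnect paths can be checked with the clinterconnect command (the output depends on your configuration):
phys-schost# clinterconnect status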
For guidelines about cluster interconnect hardware, see Interconnect Requirements and Restrictions in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS. For the transport adapters, such as ports on network interfaces, specify the transport adapter names and transport type.
If your configuration is a two-host cluster, you also specify whether your interconnect is a point-to-point connection (adapter to adapter) or uses a transport switch. Link-local IPv6 addresses, which are required on private-network adapters to support IPv6 public-network addresses, are derived from the local MAC addresses. Specify the usual adapter name, which is the device name plus the instance number or physical point of attachment (PPA).
For example, the name of instance 2 of a Cassini Gigabit Ethernet adapter would be ce2. If the scinstall utility asks whether the adapter is part of a shared virtual LAN, answer yes and specify the adapter's VID number. Specify the adapter by its VLAN virtual device name.
This name is composed of the adapter name plus the VLAN instance number, which is derived from the formula (1000*V)+N, where V is the VID number and N is the PPA. For example, for VID 73 on adapter ce2, the VLAN instance number is (1000*73)+2, or 73002. You would therefore specify the adapter name as ce73002 to indicate that it is part of a shared virtual LAN. Logical network interfaces — Logical network interfaces are reserved for use by Sun Cluster software. If you use transport switches, such as a network switch, specify a transport switch name for each interconnect.
You can use the default name switchN, where N is a number that is automatically assigned during configuration, or create another name.
Also specify the switch port name or accept the default name. The default port name is the same as the internal node ID number of the Solaris host that hosts the adapter end of the cable. Clusters with three or more voting nodes must use transport switches. Direct connection between voting cluster nodes is supported only for two-host clusters.
If your two-host cluster is direct connected, you can still specify a transport switch for the interconnect. If you specify a transport switch, you can more easily add another voting node to the cluster in the future. Fencing is a mechanism that is used by the cluster to protect the data integrity of a shared disk during split-brain situations.
By default, the scinstall utility in Typical Mode leaves global fencing enabled, and each shared disk in the configuration uses the default global fencing setting of pathcount. With the pathcount setting, the fencing protocol for each shared disk is chosen based on the number of DID paths that are attached to the disk. In Custom Mode, the scinstall utility prompts you whether to disable global fencing. For most situations, respond No to keep global fencing enabled. However, you can disable global fencing to support the following situations: the shared storage does not support SCSI reservations, or you want to enable systems that are outside the cluster to gain access to storage that is attached to the cluster. If you disable fencing under situations other than these, your data might be vulnerable to corruption during application failover. Examine this data corruption possibility carefully when you consider turning off fencing. If you turn off fencing for a shared disk that you then configure as a quorum device, the device uses the software quorum protocol. If you disable global fencing during cluster configuration, fencing is turned off for all shared disks in the cluster.
After the cluster is configured, you can change the global fencing protocol or override the fencing protocol of individual shared disks. However, to change the fencing protocol of a quorum device, you must first unconfigure the quorum device.
Then set the new fencing protocol of the disk and reconfigure it as a quorum device. For more information about setting the fencing protocol of individual shared disks, see the cldevice(1CL) man page. For more information about the global fencing setting, see the cluster(1CL) man page.
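As an illustration (the device name d3 is hypothetical, and the property values shown are just examples of supported settings), the global fencing setting and the fencing protocol of an individual shared disk are typically changed with commands of the following general form:
phys-schost# cluster set -p global_fencing=pathcount
phys-schost# cldevice set -p default_fencing=nofencing d3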
Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a voting node, the quorum device prevents amnesia or split-brain problems when the voting cluster node attempts to rejoin the cluster. During Sun Cluster installation of a two-host cluster, you can choose to let the scinstall utility automatically configure an available shared disk in the configuration as a quorum device.
Shared disks include any Sun NAS device that is configured for use as a shared disk. The scinstall utility assumes that all available shared disks are supported as quorum devices. If you want to use a quorum server or a Network Appliance NAS device as the quorum device, you configure it after scinstall processing is completed.
After installation, you can also configure additional quorum devices by using the clsetup(1CL) utility. If your cluster configuration includes third-party shared storage devices that are not supported for use as quorum devices, you must use the clsetup utility to configure quorum manually.
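For example (the DID device name d20 is hypothetical), a shared disk can be added as a quorum device, and the resulting quorum configuration verified, with the clquorum command:
phys-schost# clquorum add d20
phys-schost# clquorum show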
Minimum — A two-host cluster must have at least one quorum device, which can be a shared disk, a quorum server, or a NAS device. For other topologies, quorum devices are optional. Odd-number rule — If more than one quorum device is configured in a two-host cluster, or in a pair of hosts directly connected to the quorum device, configure an odd number of quorum devices. This configuration ensures that the quorum devices have completely independent failure pathways.
Distribution of quorum votes — For highest availability of the cluster, ensure that the total number of votes that are contributed by quorum devices is less than the total number of votes that are contributed by voting nodes. Otherwise, the nodes cannot form a cluster if all quorum devices are unavailable, even if all nodes are functioning.
Connection — You must connect a quorum device to at least two voting nodes. Changing the fencing protocol of quorum devices — For SCSI disks that are configured as a quorum device, you must unconfigure the quorum device before you can enable or disable its SCSI fencing protocol.
Software quorum protocol – You can configure supported shared disks that do not support the SCSI protocol, such as SATA disks, as quorum devices; you must disable fencing for such disks. The software quorum protocol would also be used by SCSI shared disks if fencing is disabled for such disks. Replicated devices — Sun Cluster software does not support replicated devices as quorum devices. ZFS storage pools – Do not add a configured quorum device to a ZFS storage pool. When a configured quorum device is added to a ZFS storage pool, the disk is relabeled as an EFI disk and quorum configuration information is lost.
The disk can then no longer provide a quorum vote to the cluster. After a disk is in a storage pool, you can configure that disk as a quorum device. Or, you can unconfigure the quorum device, add it to the storage pool, then reconfigure the disk as a quorum device. On the Solaris 10 OS, a zone cluster is a cluster of non-global zones.