Veritas Cluster Server (VCS) HOWTO:
===================================

$Id: VCS-HOWTO,v 1.15 2000/10/03 05:05:00 pzi Exp $

Copyright (c) Peter Ziobrzynski, pzi@pzi.net

Contents:
---------

- Copyright
- Thanks
- Overview
- VCS installation
- Summary of cluster queries
- Summary of basic cluster operations
- Changing cluster configuration
- Configuration of a test group and test resource type
- Installation of a test agent for a test resource
- Home directories service group configuration
- NIS service groups configuration
- Time synchronization services
- ClearCase configuration

Copyright:
----------

This HOWTO document may be reproduced and distributed in whole or in
part, in any medium physical or electronic, as long as this copyright
notice is retained on all copies. Commercial redistribution is allowed
and encouraged; however, the author would like to be notified of any
such distributions.

All translations, derivative works, or aggregate works incorporating
any part of this HOWTO document must be covered under this copyright
notice. That is, you may not produce a derivative work from a HOWTO
and impose additional restrictions on its distribution. Exceptions to
these rules may be granted under certain conditions.

In short, I wish to promote dissemination of this information through
as many channels as possible. However, I do wish to retain copyright
on this HOWTO document, and would like to be notified of any plans to
redistribute the HOWTO. If you have questions, please contact me:
Peter Ziobrzynski

Thanks:
-------

- Veritas Software provided numerous consultations that led to the
  cluster configuration described in this document.

- Parts of this document are based on the work I have done for
  Kestrel Solutions, Inc.

- Basis Inc. for assisting in selecting hardware components and for
  help in resolving installation problems.

- The comp.sys.sun.admin Usenet community.

Overview:
---------

This document describes the configuration of a two (or more) node
Solaris cluster using Veritas Cluster Server VCS 1.1.2 on Solaris 2.6.
A number of standard UNIX services are configured as cluster service
groups: user home directories, NIS naming services and time
synchronization (NTP). In addition, a popular Software Configuration
Management system from Rational - ClearCase - is configured as a set
of cluster service groups.

Configuring a software component as a cluster service group provides
high availability of the application as well as load balancing
(fail-over or switch-over). Besides that, a cluster configuration
makes it possible to free a node in the network for upgrades, testing
or reconfiguration and then bring it back into service very quickly
with little or no additional work.

- Cluster topology.

  The cluster topology used here is called clustered pairs. Two nodes
  share a disk on a single shared SCSI bus. Both computers and the
  disk are connected in a chain on the SCSI bus. Either differential
  or fast-wide SCSI buses can be used. The SCSI host adapter in each
  node is assigned a different SCSI id (called the initiator id) so
  both computers can coexist on the same bus.

  + Two Node Cluster with single disk:

    Node  Node
      |    /
      |   /
      |  /
      | /
      |/
     Disk

  A single shared disk can be replaced by two disks, each on its own
  private SCSI bus connecting both cluster nodes. This allows for disk
  mirroring across disks and SCSI buses. Note: the disk here can be
  understood as a disk array or a disk pack.
  + Two Node Cluster with disk pair:

    Node    Node
     |\     /|
     | \   / |
     |  \ /  |
     |   \   |
     |  / \  |
     | /   \ |
     |/     \|
    Disk    Disk

  A single pair can be extended by chaining an additional node onto
  the pair with additional disks and SCSI buses. One or more nodes can
  be added, creating an N node configuration. The perimeter nodes have
  two SCSI host adapters while the middle nodes have four.

  + Three Node Cluster:

    Node     Node     Node
     |\     /| |\     /|
     | \   / | | \   / |
     |  \ /  | |  \ /  |
     |   \   | |   \   |
     |  / \  | |  / \  |
     | /   \ | | /   \ |
     |/     \| |/     \|
    Disk   Disk Disk   Disk

  + N Node Cluster:

    Node     Node     Node        Node
     |\     /| |\     /| |\      /|
     | \   / | | \   / | | \    / |
     |  \ /  | |  \ /  | |  \  /  |
     |   \   | |   \   |...  \/   |
     |  / \  | |  / \  |     /\   |
     | /   \ | | /   \ |    /  \  |
     |/     \| |/     \|   /    \ |
    Disk   Disk Disk   Disk Disk  Disk

- Disk configuration.

  Management of the shared storage of the cluster is performed with
  the Veritas Volume Manager (VM). The VM controls which disks on the
  shared SCSI bus are assigned to (owned by) which system. In Volume
  Manager, disks are grouped into disk groups and, as a group, can be
  assigned for access from one of the systems. The assignment can be
  changed quickly, allowing for cluster fail-over or switch-over.

  Disks that compose a disk group can be scattered across multiple
  disk enclosures (packs, arrays) and SCSI buses. We used this feature
  to create disk groups that contain VM volumes mirrored across
  devices.

  Below is a schematic of 3 cluster nodes connected by SCSI buses to 4
  disk packs (we use Sun MultiPacks). Node 0 is connected to Disk Pack
  0 and Node 1 on one SCSI bus, and to Disk Pack 1 and Node 1 on a
  second SCSI bus. Disks 0 in Packs 0 and 1 are put into Disk group 0,
  disks 1 in Packs 0 and 1 are put into Disk group 1, and so on for
  all the disks in the Packs. We have 4 9 GB disks in each Pack, so we
  have 4 Disk groups between Nodes 0 and 1 that can be switched from
  one node to the other. Node 1 interfaces with Node 2 in the same way
  as with Node 0. Two disk packs, Pack 2 and Pack 3, are configured
  with disk groups 4, 5, 6 and 7 as shared storage between those
  nodes. We have a total of 8 disk groups in the cluster. Groups 0-3
  can be visible from Node 0 or 1 and groups 4-7 from Node 1 or 2.
  Node 1 is in a privileged situation and can access all disk groups.

    Node 0                Node 1                Node 2   ...   Node N
    ------                ------                ------
      |\                 /|  |\                 /|
      |  \             /  |  |  \             /  |
      |    \         /    |  |    \         /    |
      |      \     /      |  |      \     /      |
      |        \ /        |  |        \ /        |
      |         \         |  |         \         |
      |        / \        |  |        / \        |
      |      /     \      |  |      /     \      |
      |    /         \    |  |    /         \    |
      |  /             \  |  |  /             \  |
      |/                 \|  |/                 \|

    Disk Pack 0:          Disk Pack 1:   Disk Pack 2:            Disk Pack 3:

    Disk group 0:                        Disk group 4:
    +----------------------+             +------------------------+
    | Disk0          Disk0 |             | Disk0            Disk0 |
    +----------------------+             +------------------------+
    Disk group 1:                        Disk group 5:
    +----------------------+             +------------------------+
    | Disk1          Disk1 |             | Disk1            Disk1 |
    +----------------------+             +------------------------+
    Disk group 2:                        Disk group 6:
    +----------------------+             +------------------------+
    | Disk2          Disk2 |             | Disk2            Disk2 |
    +----------------------+             +------------------------+
    Disk group 3:                        Disk group 7:
    +----------------------+             +------------------------+
    | Disk3          Disk3 |             | Disk3            Disk3 |
    +----------------------+             +------------------------+
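  Once the disk groups described here are created (the commands are
  shown in the installation section below), it is convenient to check
  which node currently has each group imported; vxdg list on each node
  shows that. A minimal sketch of such a check, assuming the node
  names used later in this document and working root rsh between them:

    #!/bin/sh
    # Show which VM disk groups each node currently has imported.
    for node in foo_c bar_c; do
        echo "--- disk groups imported on $node:"
        rsh $node vxdg list
    done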
- Hardware details:

  Below is a detailed listing of the hardware configuration of the two
  nodes. Sun part numbers are included so you can order it directly
  from Sunstore and put it on your Visa:

  - E250:
    + Base: A26-AA
    + 2x CPU: X1194A
    + 2x 256MB RAM: X7004A
    + 4x UltraSCSI 9.1GB hard drive: X5234A
    + 100BaseT Fast/Wide UltraSCSI PCI adapter: X1032A
    + Quad FastEthernet controller PCI adapter: X1034A

  - MultiPack:
    + 4x 9.1GB 10000RPM disk
    + StorEdge MultiPack: SG-XDSK040C-36G

  - Connections:

    + SCSI:
      E250:                                       E250:
      X1032A-------SCSI----->Multipack<----SCSI---X1032A
      X1032A-------SCSI----->Multipack<----SCSI---X1032A

    + VCS private LAN 0:
      hme0----------Ethernet--->HUB<---Ethernet---hme0

    + VCS private LAN 1:
      X1034A(qfe0)--Ethernet--->HUB<---Ethernet---X1034A(qfe0)

    + Cluster private LAN:
      X1034A(qfe1)--Ethernet--->HUB<---Ethernet---X1034A(qfe1)

    + Public LAN:
      X1034A(qfe2)--Ethernet--->HUB<---Ethernet---X1034A(qfe2)

Installation of VCS-1.1.2:
--------------------------

Two systems are put into the cluster: foo_c and bar_c.

- Set the scsi-initiator-id boot PROM environment variable to 5 on one
  of the systems (say bar_c):

    ok setenv scsi-initiator-id 5
    ok boot -r

- Install Veritas Foundation Suite 3.0.1. Follow the Veritas manuals.

- Add entries to your c-shell environment:

    set veritas = /opt/VRTSvmsa
    setenv VMSAHOME $veritas
    setenv MANPATH ${MANPATH}:$veritas/man
    set path = ( $path $veritas/bin )

- Configure the ethernet connections to use hme0 and qfe0 as the
  cluster private interconnects. Do not create
  /etc/hostname.{hme0,qfe0}. Configure qfe2 as the public LAN network
  and qfe1 as the cluster main private network.

  The configuration files on foo_c:

    /etc/hosts:
        127.0.0.1       localhost
        # public network (192.168.0.0/16):
        192.168.1.40    bar
        192.168.1.51    foo
        # Cluster private network (network address 10.2.0.0/16):
        10.2.0.1        bar_c
        10.2.0.3        foo_c   loghost

    /etc/hostname.qfe1:
        foo_c

    /etc/hostname.qfe2:
        foo

  The configuration files on bar_c:

    /etc/hosts:
        127.0.0.1       localhost
        # Public network (192.168.0.0/16):
        192.168.1.40    bar
        192.168.1.51    foo
        # Cluster private network (network address 10.2.0.0/16):
        10.2.0.1        bar_c   loghost
        10.2.0.3        foo_c

    /etc/hostname.qfe1:
        bar_c

    /etc/hostname.qfe2:
        bar

- Configure at least two VM disk groups on the shared storage
  (MultiPacks), working on one of the systems (e.g. foo_c):

  + Create cluster disk groups spanning both MultiPacks using vxdiskadm
    '1. Add or initialize one or more disks':

        cluster1: c1t1d0 c2t1d0
        cluster2: c1t2d0 c2t2d0
        ...

    Name the VM disks like this:

        cluster1: cluster101 cluster102
        cluster2: cluster201 cluster202
        ...

    You can do it for 4 disk groups with this script:

        #!/bin/sh
        for group in 1 2 3 4;do
            vxdisksetup -i c1t${group}d0
            vxdisksetup -i c2t${group}d0
            vxdg init cluster${group} cluster${group}01=c1t${group}d0
            vxdg -g cluster${group} adddisk cluster${group}02=c2t${group}d0
        done

  + Create volumes in each group mirrored across both MultiPacks.
    You can do it for 4 disk groups with this script:

        #!/bin/sh
        for group in 1 2 3 4;do
            vxassist -b -g cluster${group} make vol01 8g layout=mirror \
                cluster${group}01 cluster${group}02
        done

  + Or do all the disk groups and volumes in one script:

        #!/bin/sh
        for group in 1 2 3 4;do
            vxdisksetup -i c1t${group}d0
            vxdisksetup -i c2t${group}d0
            vxdg init cluster${group} cluster${group}01=c1t${group}d0
            vxdg -g cluster${group} adddisk cluster${group}02=c2t${group}d0
            vxassist -b -g cluster${group} make vol01 8g layout=mirror \
                cluster${group}01 cluster${group}02
        done

  + Create Veritas file systems on the volumes:

        #!/bin/sh
        for group in 1 2 3 4;do
            mkfs -F vxfs /dev/vx/rdsk/cluster$group/vol01
        done

  + Deport a group from one system (stop the volume, then deport the
    group):

        # vxvol -g cluster2 stop vol01
        # vxdg deport cluster2

  + Import the group and start its volume on the other system to see
    if this works:

        # vxdg import cluster2
        # vxrecover -g cluster2 -sb

- With the shared storage configured it is important to know how to
  manually move the volumes from one node of the cluster to the other.
  I use a cmount command to do that. It is like an rc script with an
  additional argument for the disk group.

  To stop (deport) group 1 on a node do:

    # cmount 1 stop

  To start (import) group 1 on the other node do:

    # cmount 1 start

  The cmount script is as follows:

    #!/bin/sh
    set -x
    group=$1
    case $2 in
    start)
        vxdg import cluster$group
        vxrecover -g cluster$group -sb
        mount -F vxfs /dev/vx/dsk/cluster$group/vol01 /cluster$group
        ;;
    stop)
        umount /cluster$group
        vxvol -g cluster$group stop vol01
        vxdg deport cluster$group
        ;;
    esac

- To remove all shared storage volumes and groups do:

    #!/bin/sh
    for group in 1 2 3 4; do
        vxvol -g cluster$group stop vol01
        vxdg destroy cluster$group
    done

- Install the VCS software (from the install server on athena):

    # cd /net/athena/export/arch/VCS-1.1.2/vcs_1_1_2a_solaris
    # pkgadd -d . VRTScsga VRTSgab VRTSllt VRTSperl VRTSvcs VRTSvcswz clsp

  + Correct the /etc/rc?.d scripts to be links:

    If they are not symbolic links then it is hard to disable VCS
    startup at boot. If they are, just rename /etc/init.d/vcs to stop
    it from starting and stopping at boot.

        cd /etc
        rm rc0.d/K10vcs rc3.d/S99vcs
        cd rc0.d
        ln -s ../init.d/vcs K10vcs
        cd ../rc3.d
        ln -s ../init.d/vcs S99vcs

  + Add the -evacuate option to /etc/init.d/vcs:

    This is optional but I find it important to switch over all
    service groups from the node that is being shut down. When I take
    a cluster node down I expect the rest of the cluster to pick up
    the responsibility to run all services. The default VCS does not
    do that. The only way to move a group from one node to another is
    to crash the node or to do a manual switch-over using the hagrp
    command.

        'stop')
            $HASTOP -local -evacuate > /dev/null 2>&1
            ;;

- Add entries to your c-shell environment:

    set vcs = /opt/VRTSvcs
    setenv MANPATH ${MANPATH}:$vcs/man
    set path = ( $vcs/bin $path )

- To remove the VCS software:

  NOTE: this is required if the demo installation fails.

    # sh /opt/VRTSvcs/wizards/config/quick_start -b
    # rsh bar_c 'sh /opt/VRTSvcs/wizards/config/quick_start -b'
    # pkgrm VRTScsga VRTSgab VRTSllt VRTSperl VRTSvcs VRTSvcswz clsp
    # rm -rf /etc/VRTSvcs /var/VRTSvcs
    # init 6

- Configure /.rhosts on both nodes to allow each node transparent root
  rsh access to the other:

    /.rhosts:
        foo_c
        bar_c
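  The wizards below rely on this transparent root rsh access between
  the nodes, so it is worth confirming it works in both directions
  before going further. A minimal sketch of such a check:

    #!/bin/sh
    # Run on each node in turn: verify passwordless root rsh to
    # every cluster node before running the VCS wizards.
    for node in foo_c bar_c; do
        echo "checking rsh to $node:"
        rsh $node uname -n || echo "rsh to $node FAILED"
    done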
- Run the quick start script from one of the nodes:

  NOTE: it must be run from /usr/openwin/bin/xterm - other xterms
  cause terminal emulation problems.

    # /usr/openwin/bin/xterm &
    # sh /opt/VRTSvcs/wizards/config/quick_start

  Select hme0 and qfe0 network links for the GAB and LLT connections.
  The script will ask twice for the link interface names. Link 1 is
  hme0 and link 2 is qfe0 for both the foo_c and bar_c nodes. You
  should see the heartbeat pings on the interconnection hubs.

  The wizard creates the LLT and GAB configuration files /etc/llttab,
  /etc/gabtab and /etc/llthosts on each system:

  On foo_c:

    /etc/llttab:
        set-node foo_c
        link hme0 /dev/hme:0
        link qfe1 /dev/qfe:1
        start

  On bar_c:

    /etc/llttab:
        set-node bar_c
        link hme0 /dev/hme:0
        link qfe1 /dev/qfe:1
        start

    /etc/gabtab:
        /sbin/gabconfig -c -n2

    /etc/llthosts:
        0 foo_c
        1 bar_c

  The LLT and GAB communication is started by the rc scripts S70llt
  and S92gab installed in /etc/rc2.d.

- We can also configure the private interconnect by hand by creating
  the above files.

- Check the basic installation:

  + status of GAB:

    # gabconfig -a
    GAB Port Memberships
    ===============================================================
    Port a gen 1e4c0001 membership 01
    Port h gen dd080001 membership 01

  + status of the links:

    # lltstat -n
    LLT node information:
      Node       State    Links
    * 0 foo_c    OPEN     2
      1 bar_c    OPEN     2

  + node parameters:

    # hasys -display

- Set/update the VCS super user password:

  + add the root user:

    # haconf -makerw
    # hauser -add root
    password:...
    # haconf -dump -makero

  + change the root password:

    # haconf -makerw
    # hauser -update root
    password:...
    # haconf -dump -makero

- Configure the demo NFS service groups:

  NOTE: You have to fix the VCS wizards first. The wizard perl scripts
  have a bug that makes them core dump in the middle of filling out
  the configuration forms. The workaround is to provide a shell
  wrapper for one binary and avoid running it with a specific set of
  parameters. Do the following in VCS-1.1.2:

    # cd /opt/VRTSvcs/bin
    # mkdir tmp
    # mv iou tmp
    # cat << 'EOF' > iou
    #!/bin/sh
    echo "[$@]" >> /tmp/,.iou.log
    case "$@" in
    '-c 20 9 -g 2 2 3 -l 0 3')
        echo "skip bug" >> /tmp/,.iou.log
        ;;
    *)
        /opt/VRTSvcs/bin/tmp/iou "$@"
        ;;
    esac
    EOF
    # chmod 755 iou

  + Create the NFS mount point directories on both systems:

    # mkdir /export1 /export2

  + Run the wizard on the foo_c node:

    NOTE: it must be run from /usr/openwin/bin/xterm - other xterms
    cause terminal emulation problems.

    # /usr/openwin/bin/xterm &
    # sh /opt/VRTSvcs/wizards/services/quick_nfs

    Select for groupx:
        - public network device: qfe2
        - group name: groupx
        - IP: 192.168.1.53
        - VM disk group: cluster1
        - volume: vol01
        - mount point: /export1
        - options: rw
        - file system: vxfs

    Select for groupy:
        - public network device: qfe2
        - group name: groupy
        - IP: 192.168.1.54
        - VM disk group: cluster2
        - volume: vol01
        - mount point: /export2
        - options: rw
        - file system: vxfs

    You should see: Congratulations!...

    The /etc/VRTSvcs/conf/config directory should now have main.cf and
    types.cf files configured.

  + Reboot both systems:

    # init 6
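  After the reboot it is worth checking that the interconnects and
  both demo groups come back as the wizard left them. A minimal sketch
  of such a check, using only commands introduced above:

    #!/bin/sh
    # Quick post-reboot sanity check of the cluster installation.
    echo "--- GAB membership:"
    gabconfig -a
    echo "--- LLT links:"
    lltstat -n
    echo "--- cluster, group and resource states:"
    hastatus -summary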
Summary of cluster queries:
---------------------------

- Cluster queries:

  + list the cluster status summary:

    # hastatus -summary

    -- SYSTEM STATE
    -- System           State           Frozen

    A  foo_c            RUNNING         0
    A  bar_c            RUNNING         0

    -- GROUP STATE
    -- Group       System      Probed   AutoDisabled   State

    B  groupx      foo_c       Y        N              ONLINE
    B  groupx      bar_c       Y        N              OFFLINE
    B  groupy      foo_c       Y        N              OFFLINE
    B  groupy      bar_c       Y        N              ONLINE

  + list cluster attributes:

    # haclus -display
    #Attribute          Value
    ClusterName         my_vcs
    CompareRSM          0
    CounterInterval     5
    DumpingMembership   0
    Factor              runque 5 memory 1 disk 10 cpu 25 network 5
    GlobalCounter       16862
    GroupLimit          200
    LinkMonitoring      0
    LoadSampling        0
    LogSize             33554432
    MajorVersion        1
    MaxFactor           runque 100 memory 10 disk 100 cpu 100 network 100
    MinorVersion        10
    PrintMsg            0
    ReadOnly            1
    ResourceLimit       5000
    SourceFile          ./main.cf
    TypeLimit           100
    UserNames           root cDgqS68RlRP4k

- Resource queries:

  + list resources:

    # hares -list
    cluster1         foo_c
    cluster1         bar_c
    IP_192_168_1_53  foo_c
    IP_192_168_1_53  bar_c
    ...

  + list resource dependencies:

    # hares -dep
    #Group    Parent            Child
    groupx    IP_192_168_1_53   groupx_qfe1
    groupx    IP_192_168_1_53   nfs_export1
    groupx    export1           cluster1_vol01
    groupx    nfs_export1       NFS_groupx_16
    groupx    nfs_export1       export1
    groupx    cluster1_vol01    cluster1
    groupy    IP_192_168_1_54   groupy_qfe1
    groupy    IP_192_168_1_54   nfs_export2
    groupy    export2           cluster2_vol01
    groupy    nfs_export2       NFS_groupy_16
    groupy    nfs_export2       export2
    groupy    cluster2_vol01    cluster2

  + list attributes of a resource:

    # hares -display export1
    #Resource   Attribute         System   Value
    export1     ConfidenceLevel   foo_c    100
    export1     ConfidenceLevel   bar_c    0
    export1     Probed            foo_c    1
    export1     Probed            bar_c    1
    export1     State             foo_c    ONLINE
    export1     State             bar_c    OFFLINE
    export1     ArgListValues     foo_c    /export1 /dev/vx/dsk/cluster1/vol01 vxfs rw ""
    ...

- Group queries:

  + list groups:

    # hagrp -list
    groupx    foo_c
    groupx    bar_c
    groupy    foo_c
    groupy    bar_c

  + list group resources:

    # hagrp -resources groupx
    cluster1
    IP_192_168_1_53
    export1
    NFS_groupx_16
    groupx_qfe1
    nfs_export1
    cluster1_vol01

  + list group dependencies:

    # hagrp -dep groupx

  + list group attributes:

    # hagrp -display groupx
    #Group    Attribute             System   Value
    groupx    AutoFailOver          global   1
    groupx    AutoStart             global   1
    groupx    AutoStartList         global   foo_c
    groupx    FailOverPolicy        global   Priority
    groupx    Frozen                global   0
    groupx    IntentOnline          global   1
    groupx    ManualOps             global   1
    groupx    OnlineRetryInterval   global   0
    groupx    OnlineRetryLimit      global   0
    groupx    Parallel              global   0
    groupx    PreOnline             global   0
    groupx    PrintTree             global   1
    groupx    SourceFile            global   ./main.cf
    groupx    SystemList            global   foo_c 0 bar_c 1
    groupx    SystemZones           global
    groupx    TFrozen               global   0
    groupx    TriggerEvent          global   1
    groupx    UserIntGlobal         global   0
    groupx    UserStrGlobal         global
    groupx    AutoDisabled          foo_c    0
    groupx    AutoDisabled          bar_c    0
    groupx    Enabled               foo_c    1
    groupx    Enabled               bar_c    1
    groupx    ProbesPending         foo_c    0
    groupx    ProbesPending         bar_c    0
    groupx    State                 foo_c    |ONLINE|
    groupx    State                 bar_c    |OFFLINE|
    groupx    UserIntLocal          foo_c    0
    groupx    UserIntLocal          bar_c    0
    groupx    UserStrLocal          foo_c
    groupx    UserStrLocal          bar_c

- Node queries:

  + list nodes in the cluster:

    # hasys -list
    foo_c
    bar_c

  + list node attributes:

    # hasys -display bar_c
    #System   Attribute          Value
    bar_c     AgentsStopped      1
    bar_c     ConfigBlockCount   54
    bar_c     ConfigCheckSum     48400
    bar_c     ConfigDiskState    CURRENT
    bar_c     ConfigFile         /etc/VRTSvcs/conf/config
    bar_c     ConfigInfoCnt      0
    bar_c     ConfigModDate      Wed Mar 29 13:46:19 2000
    bar_c     DiskHbDown
    bar_c     Frozen             0
    bar_c     GUIIPAddr
    bar_c     LinkHbDown
    bar_c     Load               0
    bar_c     LoadRaw            runque 0 memory 0 disk 0 cpu 0 network 0
    bar_c     MajorVersion       1
    bar_c     MinorVersion       10
    bar_c     NodeId             1
    bar_c     OnGrpCnt           1
    bar_c     SourceFile         ./main.cf
    bar_c     SysName            bar_c
    bar_c     SysState           RUNNING
    bar_c     TFrozen            0
    bar_c     UserInt            0
    bar_c     UserStr
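  These queries combine easily in small scripts. For example, a
  minimal sketch that prints the State attribute of every group on
  every system, using only the hagrp output in the format shown above:

    #!/bin/sh
    # For every service group, show its state on each system.
    for group in `hagrp -list | awk '{print $1}' | sort -u`; do
        echo "--- $group:"
        hagrp -display $group | awk '$2 == "State"'
    done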
- Resource type queries:

  + list resource types:

    # hatype -list
    CLARiiON        Disk            DiskGroup       ElifNone
    FileNone        FileOnOff       FileOnOnly      IP
    IPMultiNIC      Mount           MultiNICA       NFS
    NIC             Phantom         Process         Proxy
    ServiceGroupHB  Share           Volume

  + list all resources of a given type:

    # hatype -resources DiskGroup
    cluster1
    cluster2

  + list attributes of a given type:

    # hatype -display IP
    #Type   Attribute            Value
    IP      AgentFailedOn
    IP      AgentReplyTimeout    130
    IP      AgentStartTimeout    60
    IP      ArgList              Device Address NetMask Options ArpDelay IfconfigTwice
    IP      AttrChangedTimeout   60
    IP      CleanTimeout         60
    IP      CloseTimeout         60
    IP      ConfInterval         600
    IP      LogLevel             error
    IP      MonitorIfOffline     1
    IP      MonitorInterval      60
    IP      MonitorTimeout       60
    IP      NameRule             IP_ + resource.Address
    IP      NumThreads           10
    IP      OfflineTimeout       300
    IP      OnlineRetryLimit     0
    IP      OnlineTimeout        300
    IP      OnlineWaitLimit      2
    IP      OpenTimeout          60
    IP      Operations           OnOff
    IP      RestartLimit         0
    IP      SourceFile           ./types.cf
    IP      ToleranceLimit       0

- Agent queries:

  + list agents:

    # haagent -list
    CLARiiON        Disk            DiskGroup       ElifNone
    FileNone        FileOnOff       FileOnOnly      IP
    IPMultiNIC      Mount           MultiNICA       NFS
    NIC             Phantom         Process         Proxy
    ServiceGroupHB  Share           Volume

  + list the status of an agent:

    # haagent -display IP
    #Agent   Attribute   Value
    IP       AgentFile
    IP       Faults      0
    IP       Running     Yes
    IP       Started     Yes

Summary of basic cluster operations:
------------------------------------

- Cluster Start/Stop:

  + stop VCS on all systems:

    # hastop -all

  + stop VCS on bar_c and move all groups out:

    # hastop -sys bar_c -evacuate

  + start VCS on the local system:

    # hastart

- Users:

  + add the GUI root user:

    # haconf -makerw
    # hauser -add root
    # haconf -dump -makero

- Group:

  + group stop, start:

    # hagrp -offline groupx -sys foo_c
    # hagrp -online groupx -sys foo_c

  + switch a group to the other system:

    # hagrp -switch groupx -to bar_c

  + freeze a group:

    # hagrp -freeze groupx

  + unfreeze a group:

    # hagrp -unfreeze groupx

  + enable a group:

    # hagrp -enable groupx

  + disable a group:

    # hagrp -disable groupx

  + enable resources of a group:

    # hagrp -enableresources groupx

  + disable resources of a group:

    # hagrp -disableresources groupx

  + flush a group:

    # hagrp -flush groupx -sys bar_c

- Node:

  + freeze a node:

    # hasys -freeze bar_c

  + thaw a node:

    # hasys -unfreeze bar_c

- Resources:

  + online a resource:

    # hares -online IP_192_168_1_54 -sys bar_c

  + offline a resource:

    # hares -offline IP_192_168_1_54 -sys bar_c

  + offline a resource and propagate to children:

    # hares -offprop IP_192_168_1_54 -sys bar_c

  + probe a resource:

    # hares -probe IP_192_168_1_54 -sys bar_c

  + clear a faulted resource:

    # hares -clear IP_192_168_1_54 -sys bar_c

- Agents:

  + start an agent:

    # haagent -start IP -sys bar_c

  + stop an agent:

    # haagent -stop IP -sys bar_c

- Reboot a node with evacuation of all service groups (groupy is
  running on bar_c):

    # hastop -sys bar_c -evacuate
    # init 6
    # hagrp -switch groupy -to bar_c
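  When taking a node down for maintenance it can be handy to record
  which groups were online on it first, so they can be switched back
  afterwards with hagrp -switch. A minimal sketch of that, using only
  the commands above (the node name is an example):

    #!/bin/sh
    # Evacuate a node, saving the list of groups it was running.
    node=bar_c
    hastatus -summary | \
        awk '$1 == "B" && $3 == "'$node'" && $NF == "ONLINE" { print $2 }' \
        > /var/tmp/groups.$node
    echo "groups that were online on $node:"
    cat /var/tmp/groups.$node
    hastop -sys $node -evacuate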
Changing cluster configuration:
-------------------------------

You cannot edit the configuration files directly while the cluster is
running; that can be done only when the cluster is down. The
configuration files are in /etc/VRTSvcs/conf/config.

To change the configuration you can:

  + use hagui;

  + stop the cluster (hastop), edit main.cf and types.cf directly,
    regenerate main.cmd (hacf -generate .) and start the cluster
    (hastart);

  + use the following command line based procedure on a running
    cluster.

To change the cluster while it is running do this:

- Dump the current cluster configuration to files and generate the
  main.cmd file:

    # haconf -dump
    # hacf -generate .
    # hacf -verify .

- Create a new configuration directory:

    # mkdir -p ../new

- Copy the existing *.cf files in there:

    # cp main.cf types.cf ../new

- Add the new stuff to it:

    # vi main.cf types.cf

- Regenerate the main.cmd file with the low level commands:

    # cd ../new
    # hacf -generate .
    # hacf -verify .

- Catch the diffs:

    # diff ../config/main.cmd main.cmd > ,.cmd

- Prepend this to the top of the file to make the configuration
  read-write:

    haconf -makerw

- Append this command to make the configuration read-only again:

    haconf -dump -makero

- Apply the diffs you need:

    # sh -x ,.cmd

Configuration of a test group and test resource type:
------------------------------------------------------

To get comfortable with the cluster configuration it is useful to
create your own group that uses your own resource type. The example
below demonstrates the configuration of a "do nothing" group with one
resource of our own type.

- Add a group test with one resource test.

  Add this to /etc/VRTSvcs/conf/config/new/types.cf:

    type Test (
        str Tester
        NameRule = resource.Name
        int IntAttr
        str StringAttr
        str VectorAttr[]
        str AssocAttr{}
        static str ArgList[] = { IntAttr, StringAttr, VectorAttr, AssocAttr }
    )

  Add this to /etc/VRTSvcs/conf/config/new/main.cf:

    group test (
        SystemList = { foo_c, bar_c }
        AutoStartList = { foo_c }
    )

    Test test (
        IntAttr = 100
        StringAttr = "Testing 1 2 3"
        VectorAttr = { one, two, three }
        AssocAttr = { one = 1, two = 2 }
    )

- Run the hacf -generate and diff as above. Edit the diff to get the
  ,.cmd file:

    haconf -makerw
    hatype -add Test
    hatype -modify Test SourceFile "./types.cf"
    haattr -add Test Tester -string
    hatype -modify Test NameRule "resource.Name"
    haattr -add Test IntAttr -integer
    haattr -add Test StringAttr -string
    haattr -add Test VectorAttr -string -vector
    haattr -add Test AssocAttr -string -assoc
    hatype -modify Test ArgList IntAttr StringAttr VectorAttr AssocAttr
    hatype -modify Test LogLevel error
    hatype -modify Test MonitorIfOffline 1
    hatype -modify Test AttrChangedTimeout 60
    hatype -modify Test CloseTimeout 60
    hatype -modify Test CleanTimeout 60
    hatype -modify Test ConfInterval 600
    hatype -modify Test MonitorInterval 60
    hatype -modify Test MonitorTimeout 60
    hatype -modify Test NumThreads 10
    hatype -modify Test OfflineTimeout 300
    hatype -modify Test OnlineRetryLimit 0
    hatype -modify Test OnlineTimeout 300
    hatype -modify Test OnlineWaitLimit 2
    hatype -modify Test OpenTimeout 60
    hatype -modify Test RestartLimit 0
    hatype -modify Test ToleranceLimit 0
    hatype -modify Test AgentStartTimeout 60
    hatype -modify Test AgentReplyTimeout 130
    hatype -modify Test Operations OnOff
    haattr -default Test AutoStart 1
    haattr -default Test Critical 1
    haattr -default Test Enabled 1
    haattr -default Test TriggerEvent 0
    hagrp -add test
    hagrp -modify test SystemList foo_c 0 bar_c 1
    hagrp -modify test AutoStartList foo_c
    hagrp -modify test SourceFile "./main.cf"
    hares -add test Test test
    hares -modify test Enabled 1
    hares -modify test IntAttr 100
    hares -modify test StringAttr "Testing 1 2 3"
    hares -modify test VectorAttr one two three
    hares -modify test AssocAttr one 1 two 2
    haconf -dump -makero

- Feed it to sh:

    # sh -x ,.cmd

- Both the test group and the test resource should now be added to
  the cluster.
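  A quick way to confirm that the new type, group and resource really
  made it into the running configuration is to query them with the
  commands from the summary above:

    # hatype -display Test
    # hagrp -display test
    # hares -display test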
Installation of a test agent for a test resource:
-------------------------------------------------

This agent does not start or monitor any specific resource. It just
maintains its persistent state in the ,.on file. It can be used as a
template for other agents that perform some real work.

- In /opt/VRTSvcs/bin create the Test directory:

    # cd /opt/VRTSvcs/bin
    # mkdir Test

- Link in the precompiled agent binary for script-implemented methods:

    # cd Test
    # ln -s ../ScriptAgent TestAgent

- Create dummy agent scripts in /opt/VRTSvcs/bin/Test (make them
  executable - chmod 755 ...):

    online:
        #!/bin/sh
        echo "`date` $0 $@" >> /opt/VRTSvcs/bin/Test/log
        echo yes > /opt/VRTSvcs/bin/Test/,.on

    offline:
        #!/bin/sh
        echo "`date` $0 $@" >> /opt/VRTSvcs/bin/Test/log
        echo no > /opt/VRTSvcs/bin/Test/,.on

    open:
        #!/bin/sh
        echo "`date` $0 $@" >> /opt/VRTSvcs/bin/Test/log

    close:
        #!/bin/sh
        echo "`date` $0 $@" >> /opt/VRTSvcs/bin/Test/log

    shutdown:
        #!/bin/sh
        echo "`date` $0 $@" >> /opt/VRTSvcs/bin/Test/log

    clean:
        #!/bin/sh
        echo "`date` $0 $@" >> /opt/VRTSvcs/bin/Test/log

    monitor:
        #!/bin/sh
        echo "`date` $0 $@" >> /opt/VRTSvcs/bin/Test/log
        case "`cat /opt/VRTSvcs/bin/Test/,.on`" in
        no)
            exit 100
            ;;
        *)
            exit 101
            ;;
        esac

- Start the agent:

    # haagent -start Test -sys foo_c

- Distribute the agent code to the other nodes:

    # cd /opt/VRTSvcs/bin/
    # rsync -av --rsync-path=/opt/pub/bin/rsync Test bar_c:/opt/VRTSvcs/bin

- Start the test group:

    # hagrp -online test -sys foo_c

Note: Distribution or synchronization of the agent code is very
important for cluster integrity. If the agents differ between cluster
nodes, unpredictable things can happen. I maintain a shell script in
the Veritas agent directory (/opt/VRTSvcs/bin) to distribute the code
of all agents I work on:

    #!/bin/sh
    set -x
    mkdir -p /tmp/vcs
    for dest in hades_c:/opt/VRTSvcs/bin /tmp/vcs;do
        rsync -av --rsync-path=/opt/pub/bin/rsync \
            --exclude=log --exclude=,.on \
            ,.sync CCViews CCVOBReg CCVOBMount ClearCase Test \
            CCRegistry NISMaster NISClient $dest
    done
    cd /tmp
    tar cvf vcs.tar vcs

Home directories service group configuration:
---------------------------------------------

We configure home directories to be a service group consisting of an
IP address and the directory containing all home directories. Users
can consistently connect (telnet, rsh, etc.) to the logical IP and
expect to find their home directories local on the system. The
directory that we use is the source directory for the automounter,
which mounts the individual directories as needed under /home. We put
the directories on the /cluster3 file system and mount them with this
/etc/auto_home entry:

    *   localhost:/cluster3/&

We assume that all required user accounts are configured on all
cluster nodes. This can be done by hand, rdist-ing the /etc/passwd and
group files, or by using NIS. We used both methods; the NIS one is
described below.

All resources of the group are standard VCS-supplied ones so we do not
have to implement any agent code for additional resources.
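If the accounts are maintained by hand rather than with NIS, keeping
/etc/passwd and /etc/group identical on the nodes is easy to script.
A minimal sketch in the same spirit as the agent distribution script
above (it assumes the same /opt/pub/bin/rsync binary and root rsh
access, and should be reviewed before being pointed at real account
files):

    #!/bin/sh
    # Push the local account files to the other cluster node.
    for node in bar_c; do
        rsync -av --rsync-path=/opt/pub/bin/rsync \
            /etc/passwd /etc/group $node:/etc
    done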
Group 'homes' has the following resources (types in brackets):

    homes:
        IP_192_168_1_55 (IP)
            |         |
            v         v
        mount_homes (Mount)   qfe2_homes (NIC)
            |
            v
        volume_homes (Volume)
            |
            v
        dgroup_homes (DiskGroup)

The service group definition for this group is as follows (main.cf):

    group homes (
        SystemList = { bar_c, foo_c }
        AutoStartList = { bar_c }
    )

    DiskGroup dgroup_homes (
        DiskGroup = cluster3
    )

    IP IP_192_168_1_55 (
        Device = qfe2
        Address = "192.168.1.55"
    )

    Mount mount_homes (
        MountPoint = "/cluster3"
        BlockDevice = "/dev/vx/dsk/cluster3/vol01"
        FSType = vxfs
        MountOpt = rw
    )

    NIC qfe2_homes (
        Device = qfe2
        NetworkType = ether
    )

    Volume volume_homes (
        Volume = vol01
        DiskGroup = cluster3
    )

    IP_192_168_1_55 requires mount_homes
    IP_192_168_1_55 requires qfe2_homes
    mount_homes requires volume_homes
    volume_homes requires dgroup_homes

NIS service groups configuration:
---------------------------------

NIS is configured as two service groups: one for the NIS master server
and the other for the NIS clients. The server is configured to store
all NIS source data files on the shared storage in the /cluster1/yp
directory. We copied the following files to /cluster1/yp:

    auto_home    ethers   mail.aliases   netmasks   protocols   services
    auto_master  group    netgroup       networks   publickey   timezone
    bootparams   hosts    netid          passwd     rpc

The Makefile in /var/yp required some changes to reflect the
non-default (not /etc) location of the source files. Also, using
sendmail to generate new aliases while the NIS service was still
starting up was hanging, so we had to remove it from the standard map
generation. The limitation here is that new mail aliases can only be
added when NIS is completely up and running. The following diffs have
been applied to /var/yp/Makefile:

    *** Makefile-   Sun May 14 23:33:33 2000
    --- Makefile.var.yp     Fri May  5 07:38:02 2000
    ***************
    *** 13,19 ****
      # resolver for hosts not in the current domain.
      #B=-b
      B=
    ! DIR =/etc
      #
      # If the passwd, shadow and/or adjunct files used by rpc.yppasswdd
      # live in directory other than /etc then you'll need to change the
    --- 13,19 ----
      # resolver for hosts not in the current domain.
      #B=-b
      B=
    ! DIR =/cluster1/yp
      #
      # If the passwd, shadow and/or adjunct files used by rpc.yppasswdd
      # live in directory other than /etc then you'll need to change the
    ***************
    *** 21,30 ****
      # DO NOT indent the line, however, since /etc/init.d/yp attempts
      # to find it with grep "^PWDIR" ...
      #
    ! PWDIR =/etc
      DOM = `domainname`
      NOPUSH = ""
    ! ALIASES = /etc/mail/aliases
      YPDIR=/usr/lib/netsvc/yp
      SBINDIR=/usr/sbin
      YPDBDIR=/var/yp
    --- 21,30 ----
      # DO NOT indent the line, however, since /etc/init.d/yp attempts
      # to find it with grep "^PWDIR" ...
      #
    ! PWDIR =/cluster1/yp
      DOM = `domainname`
      NOPUSH = ""
    ! ALIASES = /cluster1/yp/mail.aliases
      YPDIR=/usr/lib/netsvc/yp
      SBINDIR=/usr/sbin
      YPDBDIR=/var/yp
    ***************
    *** 45,51 ****
          else $(MAKE) $(MFLAGS) -k all NOPUSH=$(NOPUSH);fi

      all: passwd group hosts ethers networks rpc services protocols \
    !     netgroup bootparams aliases publickey netid netmasks c2secure \
          timezone auto.master auto.home

      c2secure:
    --- 45,51 ----
          else $(MAKE) $(MFLAGS) -k all NOPUSH=$(NOPUSH);fi

      all: passwd group hosts ethers networks rpc services protocols \
    !     netgroup bootparams publickey netid netmasks \
          timezone auto.master auto.home

      c2secure:
    ***************
    *** 187,193 ****
          @cp $(ALIASES) $(YPDBDIR)/$(DOM)/mail.aliases;
          @/usr/lib/sendmail -bi -oA$(YPDBDIR)/$(DOM)/mail.aliases;
          $(MKALIAS) $(YPDBDIR)/$(DOM)/mail.aliases $(YPDBDIR)/$(DOM)/mail.byaddr;
    -     @rm $(YPDBDIR)/$(DOM)/mail.aliases;
          @touch aliases.time;
          @echo "updated aliases";
          @if [ ! $(NOPUSH) ]; then $(YPPUSH) -d $(DOM) mail.aliases; fi
    --- 187,192 ----
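With DIR and PWDIR pointing at /cluster1/yp, routine NIS changes are
made by editing the source files on the shared file system and
rebuilding the maps on whichever node currently has the nis_master
group (defined below) online. For example, to add a host entry (a
sketch of the usual procedure, nothing VCS-specific):

    # vi /cluster1/yp/hosts
    # cd /var/yp
    # make hosts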
We need only one master server, so only one instance of this service
group is allowed on the cluster (the group is not parallel).

Group 'nis_master' has the following resources (types in brackets):

    nis_master:
        master_NIS (NISMaster)
            |
            v
        mount_NIS (Mount)
            |
            v
        volume_NIS (Volume)
            |
            v
        dgroup_NIS (DiskGroup)

The client service group is designed to configure the domain name on
the node and then start ypbind in broadcast mode. We need the NIS
client to run on every node so it is designed as a parallel group.
Clients cannot function without a master server running somewhere on
the cluster network, so we include a dependency between the client and
master service groups as 'online global'. The client group
unconfigures NIS completely from the node when it is shut down. This
may seem radical but it is required for consistency with the startup.
To allow the master group to come online we also include in this group
automatic configuration of the domain name.

The nis_master group is defined as follows (main.cf):

    group nis_master (
        SystemList = { bar_c, foo_c }
        AutoStartList = { bar_c }
    )

    DiskGroup dgroup_NIS (
        DiskGroup = cluster1
    )

    Mount mount_NIS (
        MountPoint = "/cluster1"
        BlockDevice = "/dev/vx/dsk/cluster1/vol01"
        FSType = vxfs
        MountOpt = rw
    )

    NISMaster master_NIS (
        Source = "/cluster1/yp"
        Domain = mydomain
    )

    Volume volume_NIS (
        Volume = vol01
        DiskGroup = cluster1
    )

    master_NIS requires mount_NIS
    mount_NIS requires volume_NIS
    volume_NIS requires dgroup_NIS

Group 'nis_client' has the following resource (types in brackets):

    nis_client:
        client_NIS (NISClient)

The nis_client group is defined as follows (main.cf):

    group nis_client (
        SystemList = { bar_c, foo_c }
        Parallel = 1
        AutoStartList = { bar_c, foo_c }
    )

    NISClient client_NIS (
        Domain = mydomain
    )

    requires group nis_master online global

Both the master and client service groups use custom-built resources
and corresponding agent code.

The resources are defined as follows (in types.cf):

    type NISClient (
        static str ArgList[] = { Domain }
        NameRule = resource.Name
        str Domain
    )

    type NISMaster (
        static str ArgList[] = { Source, Domain }
        NameRule = resource.Name
        str Source
        str Domain
    )
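The agents for these types follow the same pattern as the Test agent
shown earlier: a directory under /opt/VRTSvcs/bin with a symlink to
ScriptAgent and a handful of method scripts. Purely as an illustration
- this is a minimal sketch, not the code from the shar archive below -
the NISClient online, offline and monitor methods could look something
like this, assuming the standard Solaris domainname, ypbind
(/usr/lib/netsvc/yp/ypbind) and ypwhich commands and that the Domain
attribute arrives as the second script argument:

    online:
        #!/bin/sh
        # Sketch only: set the domain and start ypbind in broadcast
        # mode; ypbind needs the binding directory to exist.
        domainname $2
        mkdir -p /var/yp/binding/$2
        /usr/lib/netsvc/yp/ypbind -broadcast

    offline:
        #!/bin/sh
        # Sketch only: stop ypbind and unconfigure the domain, as the
        # text above describes for the client group shutdown.
        pid=`ps -e | grep ypbind | awk '{print $1}'`
        [ -n "$pid" ] && kill $pid
        domainname ""

    monitor:
        #!/bin/sh
        # Sketch only: 101 tells VCS the resource is online, 100 that
        # it is offline (same convention as the Test monitor above).
        if ypwhich > /dev/null 2>&1; then
            exit 101
        else
            exit 100
        fi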
The agent code for NISMaster and NISClient

    NISMaster/
    NISMaster/NISMasterAgent
    NISMaster/monitor
    NISMaster/offline
    NISMaster/online
    NISMaster/open
    NISMaster/shutdown
    NISClient/
    NISClient/NISClientAgent
    NISClient/monitor
    NISClient/offline
    NISClient/online
    NISClient/open
    NISClient/shutdown

follows as a shar archive:

    #!/bin/sh
    sed 's/^X//' << 'SHAR_EOF' | uudecode
    begin 600 vcs-nis.tar.gz
    [ uuencoded archive data omitted ]