Sun Microsystems, Inc.  Sun System Handbook - ISO 3.4 June 2011 Internal/Partner Edition
   Home | Current Systems | Former STK Products | EOL Systems | Components | General Info | Search | Feedback

Asset ID: 1-71-1009109.1
Update Date: 2009-12-03
Keywords:

Solution Type: Technical Instruction

Solution  1009109.1 :   Sun StorEdge[TM] 9900 series: Cheatsheet for HDS CCI(Raid Manager, HORCM) Configuration File  


Related Items
  • Sun Storage 9970 System
  • Sun Storage 9910 System
  • Sun Storage 9960 System
  • Sun Storage 9980 System
Related Categories
  • GCS>Sun Microsystems>Storage - Disk>Datacenter Disk

Previously Published As
212582


Description
This document provides a cheatsheet to ease the implementation and basic
troubleshooting of the HDS CCI scripts for ShadowImage/TrueCopy on a
Solaris[TM] host.

Steps to Follow
The HDS CCI software on the Solaris server displays ShadowImage/TrueCopy
information and performs ShadowImage/TrueCopy operations from the command
line or from a command-line batch script.
Once a CCI operation script is established, ShadowImage/TrueCopy can
provide continuous unattended data backup.
The following six major steps set up CCI for ShadowImage/TrueCopy
operations after the Sun StorEdge[TM] 9900 hardware installation is complete:
1. Install the CCI (RAID Manager) software on the Solaris host system.
2. Configure a 9900 LDEV as the CCI Command Device and map it to a host port.
3. Create the CCI (HORCM) configuration files (horcm0.conf and horcm1.conf).
4. Configure the Solaris host system(s) to run ShadowImage/TrueCopy.
5. Test the HORCM configuration files and launch the HORCM instance(s) on the
Solaris host system(s).
6. Perform CCI (RAID Manager) pair operations.
****************************************************
Step 1: Install CCI Software on Solaris Host System
****************************************************
1. Insert the CCI CD-ROM into the CD-ROM drive.
2. Install the CCI software.
# cd /opt
# cpio -idmu < /cdrom/cdrom0/SOLARIS/RMHORC
# ln -s /opt/HORCM /HORCM
# cd /HORCM
# ./horcminstall.sh
3. Verify the CCI software version number.
# raidqry -h
Model: RAID-Manager/Sun-Solaris
Ver&Rev: 01-10-03/02
Usage: raidqry [options]
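For logging in an install script, the version string can be extracted from the raidqry output. A minimal sketch that parses the sample output shown above (no CCI command is actually run here):

```shell
# Sample output captured from "raidqry -h" above; in a live script you
# would use: raidqry_out=$(raidqry -h)
raidqry_out='Model: RAID-Manager/Sun-Solaris
Ver&Rev: 01-10-03/02
Usage: raidqry [options]'

# Extract the version/revision field (second whitespace-delimited field)
ver=$(printf '%s\n' "$raidqry_out" | awk '/Ver&Rev/ {print $2}')
echo "CCI version: $ver"    # prints: CCI version: 01-10-03/02
```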
********************************
Step 2: Configure Command Device
********************************
See HDS manual rd1053-3.pdf, section 4.5.2 (01/2003 version), for details.
1. Open [Lun Manager] tab from Storage Navigator.
2. Select [LU Path], double click a port, select a host group.
3. The [LU Path] table displays information about the host group corresponding
to the LU path.
4. Select and right-click a LUN that is not currently a command device.
Select [Command Device]: [off -> on] from the pop-up menu.
5. Select [Yes] in the confirmation message asking whether you want
to use the logical device as a command device.
6. Select [Apply] to apply the changes and select [OK] in the confirmation
message.
7. The HBA may or may not discover the new LUN dynamically, depending on
the SAN release number, mode 249, and whether the HBA is Sun-badged.
If the LUN cannot be configured dynamically, exit Storage Navigator,
then check and edit the appropriate /kernel/drv/sd.conf or
/kernel/drv/ssd.conf file. A system reconfiguration reboot (with the
commands "init 0", then "boot -r") or the device reconfiguration command
"devfsadm -C" may be required to see the new LUN.
8. Check the disk names in the output of the format command. The Command
Device (CM) is identified by the "CM" string in the disk details.
For example:
2. c2t2d6 <HITACHI-OPEN-V-CM-2108 cyl 52 alt 2 hd 15 sec 128>
**
/pci@1f,0/pci@1/fibre-channel@2/sd@1.6
9. Use the format command to write a label to the new logical device that
is the Command Device, since Solaris requires the Command Device to be
labeled.
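To pick out the command device from a long format listing in a script, the "-CM-" inquiry string can be matched. A minimal sketch using the sample format line above (one common idiom for capturing the disk list non-interactively is `format </dev/null`):

```shell
# Sample line from the "format" disk list above; on a live system the list
# could be captured with: format_out=$(format </dev/null 2>/dev/null)
format_out='2. c2t2d6 <HITACHI-OPEN-V-CM-2108 cyl 52 alt 2 hd 15 sec 128>'

# A command device is recognizable by the "-CM-" string in its inquiry data;
# the second field of the matching line is the cXtYdZ device name.
cm_dev=$(printf '%s\n' "$format_out" | awk '/-CM-/ {print $2}')
echo "Command device: $cm_dev"    # prints: Command device: c2t2d6
```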
****************************************************************************
Step 3: Create the CCI(HORCM) configuration files(horcm0.conf and horcm1.conf)
****************************************************************************
The HDS CCI configuration contains four sections - HORCM_MON, HORCM_CMD,
HORCM_DEV and HORCM_INST. It defines the relationship of the connected
hosts, volumes and the groups for the CCI instance.
Both TrueCopy and ShadowImage can have more than one CCI host. In most cases
both the PVOLs and SVOLs are managed from the same host with multiple HORCM
instances, but you can also have the PVOLs managed by one host and the
SVOLs managed by another host.
The horcm configuration files must reside in the /etc directory.
The following are templates for a pair of CCI configuration
files (horcm0.conf and horcm1.conf). The local host uses horcm0.conf
and the remote host uses horcm1.conf. When using two separate hosts to
manage the PVOL and SVOL, it is not necessary to name the configuration
files horcm0.conf and horcm1.conf. A single horcm.conf is sufficient
unless more than one instance runs on the same host.
------------
horcm0.conf
------------
HORCM_MON
#ip_address     service         poll(10ms)      timeout(10ms)
192.168.1.10    horcm0          1000            3000
HORCM_CMD
#dev_name       dev_name        dev_name
/dev/rdsk/c1t0d6s2
HORCM_DEV
#dev_group      dev_name        port#           TargetID        LU#
teama           pair1           CL1-C           0               4
teama           pair2           CL1-C           0               5
HORCM_INST
#dev_group      ip_address      service
teama           192.168.1.12    horcm1
------------
horcm1.conf
------------
HORCM_MON
#ip_address     service         poll(10ms)      timeout(10ms)
192.168.1.12    horcm1          1000            3000
HORCM_CMD
#dev_name       dev_name        dev_name
/dev/rdsk/c1t0d6s2
HORCM_DEV
#dev_group      dev_name        port#           TargetID        LU#
teama           pair1           CL1-D           0               0
teama           pair2           CL1-D           0               1
HORCM_INST
#dev_group      ip_address      service
teama           192.168.1.10    horcm0
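The horcm0.conf template above can be generated from a script with a quoted here-document, which sidesteps the tab and trailing-space pitfalls noted in the tips later in this step. A sketch only: the addresses, device path, and pair names are the example values from this article, and a temporary path is used here instead of /etc/horcm0.conf.

```shell
# Write the horcm0.conf template shown above; on a real host this would be
# /etc/horcm0.conf -- a temporary path is used here for illustration.
conf=/tmp/horcm0.conf

cat > "$conf" <<'EOF'
HORCM_MON
#ip_address     service         poll(10ms)      timeout(10ms)
192.168.1.10    horcm0          1000            3000

HORCM_CMD
#dev_name       dev_name        dev_name
/dev/rdsk/c1t0d6s2

HORCM_DEV
#dev_group      dev_name        port#           TargetID        LU#
teama           pair1           CL1-C           0               4
teama           pair2           CL1-C           0               5

HORCM_INST
#dev_group      ip_address      service
teama           192.168.1.12    horcm1
EOF

grep -c '^HORCM_' "$conf"    # prints 4 (one per parameter section)
```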
--------------------------------
Definition of HORCM_MON Section:
--------------------------------
1. ip_address: The IP address of the local host.
2. service: The UDP service name assigned to the HORCM communication path,
as registered in /etc/services.
3. poll: The interval for monitoring paired volumes. Use the default value.
4. timeout: The time-out period for communication with the remote server.
Use the default value.
-------------------------------
Definition of HORCM_CMD Section
-------------------------------
The command device must be mapped to the SCSI/fibre port using the SVP or
the LUN Manager remote console software.
1. dev_name: Specify the Solaris raw device path name. To enable dual
pathing of the command device, include all paths on a single line,
for example:
#dev_name             dev_name              dev_name
/dev/rdsk/c1t0d6s2    /dev/rdsk/c2t0d6s2
An MPxIO device path can also be used.
--------------------------------
Definition of HORCM_DEV Section:
--------------------------------
1. dev_group: Names a group of paired logical volumes.
2. dev_name: Names the paired logical volume within a group.
3. port #: Defines the 9900V/9900 port number of the volume that
corresponds with the dev_name volume.
4. Target ID: Defines the SCSI/fibre target ID number of the physical
volume on the specified port.
5. LU #: Defines the SCSI/fibre logical unit number (LU#) of the physical
volume on the specified target ID and port.
6. MU #: Defines the mirror unit number (0 - 2) of ShadowImage volumes. If
this number is omitted, it is assumed to be zero (0). TrueCopy
does not have this parameter since it does not use mirroring.
This information is gathered from the raidscan command.  HORCM must be
started in order to run this command:
# raidscan -p CL1-B
PORT# /ALPA/C,TID#,LU#.Num(LDEV#....)...P/S, Status,Fence,LDEV#,P-Seq#,P-LDEV#
CL1-B   /01/ 7,125,  0.1(10)............SMPL  ----  ------ ----, -----  ----
CL1-B   /01/ 7,125,  1.1(128)...........SMPL  ----  ------ ----, -----  ----
CL1-B   /01/ 7,125,  2.1(129)...........SMPL  ----  ------ ----, -----  ----
Important: You can encounter configuration problems if Storage Navigator
information is used instead of the raidscan command. See SRDB 71101 for
more details.
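As an illustration, the raidscan output above can be converted into candidate HORCM_DEV lines. This is a sketch only: the group name (teama) and device names (pairN) are example values chosen here, not something raidscan reports:

```shell
# Sample "raidscan -p CL1-B" output from above; on a live host:
#   raidscan_out=$(raidscan -p CL1-B)
raidscan_out='CL1-B   /01/ 7,125,  0.1(10)............SMPL  ----  ------ ----, -----  ----
CL1-B   /01/ 7,125,  1.1(128)...........SMPL  ----  ------ ----, -----  ----'

# Turn each line into a candidate HORCM_DEV entry. Field 3 is "C,TID#,",
# field 4 begins with the LU#; the dev_group/dev_name values are made up.
dev_lines=$(printf '%s\n' "$raidscan_out" | awk '
    { split($3, a, ",");     # a[1] = controller, a[2] = TID#
      split($4, b, ".");     # b[1] = LU#
      printf "teama  pair%d  %s  %s  %s\n", ++n, $1, a[2], b[1] }')
printf '%s\n' "$dev_lines"
# prints:
# teama  pair1  CL1-B  125  0
# teama  pair2  CL1-B  125  1
```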
--------------------------------
Definition of HORCM_INST Section
--------------------------------
1. dev_group: The group name defined in dev_group of HORCM_DEV.
2. ip_address: The network address of the specified remote server or local
server if using more than one instance on the same system to manage the
PVOLs and SVOLs.
3. service: The service name of the HORCM instance that will be used to
manage the paired volumes, whether they are managed by another host or by
another HORCM instance on the same host.
---------------------------------------
Tips to create HORCM Configuration File
---------------------------------------
1. Use the raw device path name, including the slice (s2), in the
HORCM_CMD section.
2. Make sure the port number, LUN numbers and Target IDs (TIDs)
are correct in the HORCM_DEV section.
3. Leave one blank line between the four parameter sections for
easy reading and troubleshooting.
4. Do not leave blank lines at the beginning or the end of the file.
5. Do not leave extra spaces at the end of any line.
6. Use spaces instead of tabs when creating the HORCM configuration files.
7. When copying the HORCM files from a Windows host to the Solaris host,
make sure that there are no non-printing characters in the file,
especially when the HORCM files were created with Windows Notepad.
8. Restart the HORCM instance whenever the HORCM configuration
files are modified.
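Tips 4 through 7 can be checked mechanically before starting HORCM. A hypothetical lint pass along these lines (the sample file written here is deliberately bad, containing a tab, so the checks have something to find):

```shell
# Hypothetical whitespace lint for a HORCM configuration file, covering
# tabs, trailing spaces, and DOS line endings.
conf=/tmp/horcm_check.conf
# Deliberately write a bad file (contains a tab) to exercise the checks
printf 'HORCM_MON\n192.168.1.10\thorcm0 1000 3000\n' > "$conf"

errs=0
grep -q "$(printf '\t')" "$conf" && { echo 'tab character found';        errs=1; }
grep -q ' $' "$conf"             && { echo 'trailing space found';       errs=1; }
grep -q "$(printf '\r')" "$conf" && { echo 'DOS line ending (CR) found'; errs=1; }
if [ "$errs" -eq 0 ]; then
    echo 'whitespace checks passed'
else
    echo 'fix the file before starting HORCM'
fi
```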
**************************************************************************
Step 4: Configure the Solaris host systems to run the ShadowImage/TrueCopy
**************************************************************************
1. Identify the local hostname and IP address for ShadowImage, or
identify both the local and remote hostnames and IP addresses
for TrueCopy.
2. Edit the /etc/services file on the host(s) by adding the following two lines:
horcm0        11000/udp
horcm1        11001/udp
3. Set up two environment variables. One is required to run
HORCM commands and the other identifies whether one is running
ShadowImage or TrueCopy. (The syntax below is for the C shell.)
# setenv HORCMINST 0   ! for horcm0.conf
or
# setenv HORCMINST 1   ! for horcm1.conf
# setenv HORCC_MRCF 1  ! for ShadowImage
or
# unsetenv HORCC_MRCF  ! for TrueCopy
NOTE: The HORCC_MRCF environment variable must be unset for TrueCopy,
i.e. it cannot even be set to zero.
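The setenv syntax above is for the C shell. For Bourne-style shells (sh, ksh, bash), the equivalents are below; a small sketch, assuming the same instance numbering:

```shell
# Bourne/Korn shell equivalents of the csh "setenv" lines above
HORCMINST=0; export HORCMINST      # use instance 0 (horcm0.conf)

HORCC_MRCF=1; export HORCC_MRCF    # for ShadowImage operations
# ...but for TrueCopy the variable must not exist at all:
unset HORCC_MRCF

echo "HORCMINST=$HORCMINST HORCC_MRCF=${HORCC_MRCF:-<unset>}"
# prints: HORCMINST=0 HORCC_MRCF=<unset>
```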
******************************************
Step 5: Test the HORCM Configuration files
******************************************
For local host, use horcm0.conf and for remote host, use horcm1.conf
from above templates. Modify the templates to fit your needs. Below is
the definition summary of each of four sections:
HORCM_MON: IP address of the local system.
HORCM_CMD: Physical drive ID of the Command Device.
HORCM_DEV: Identifies the Port, LUN, TID and S-VOL(s).
HORCM_INST: IP address of the remote system.
1. Verify the HORCM instance:
a. Comment out all entries in the HORCM_DEV and HORCM_INST
parameter sections with the pound (#) character in BOTH
the horcm0.conf and horcm1.conf files. The HORCM configuration
file should be similar to the following:
HORCM_MON
#ip_address     service         poll(10ms)      timeout(10ms)
192.168.1.10    horcm0          1000            3000
HORCM_CMD
#dev_name       dev_name        dev_name
/dev/rdsk/c1t0d6s2
#HORCM_DEV
#dev_group      dev_name        port#           TargetID        LU#
#teama           pair1           CL1-C           0               4
#teama           pair2           CL1-C           0               5
#HORCM_INST
#dev_group      ip_address      service
#teama           192.168.1.12    horcm1
b. Start the HORCM instance in a system terminal.
# horcmstart.sh 0   ! on local system
# horcmstart.sh 1   ! on remote system
If either of the two HORCM instances fails to start, investigate
the log file directories under /opt/HORCM for the reason for the
failure. The console window may point to the specific log file.
Fix this problem before moving to the next step. Note that you
can start multiple instances on the same host by specifying the
instance numbers:
# horcmstart.sh 0 1 2
2. Verify the port, LUN number, and TID settings:
a. Use raidscan -p <port#> -fx from the local host and the remote
host to confirm the settings of port #, LUN # and TID #. Example
output from raidscan is as follows:
PORT# /ALPA/C,TID#,LU#.Num(LDEV#..).P/S, Status,Fence,LDEV#,P-Seq#,P-LDEV#
CL1-D / ef/ 0, 0,   0.1(507).......SMPL  ------ ----- ----, ----- ------
CL1-D / ef/ 0, 0,   1.1(508).......SMPL  ------ ----- ----, ----- ------
b. If any port #, LUN # or TID # is wrong, first stop the
HORCM instance by issuing the following command:
# horcmshutdown.sh 0       ! on local system
# horcmshutdown.sh 1       ! on remote system
Next, modify the horcm[01].conf files to have the correct port #,
LUN # and TID #, then restart the HORCM instance by issuing
horcmstart.sh [01].
3. If steps 1 and 2 pass, uncomment the entries in the
HORCM_DEV and HORCM_INST sections, leaving the comment header line
(the second line) of each parameter section commented. The HORCM
configuration file should be similar to the following:
HORCM_MON
#ip_address     service         poll(10ms)      timeout(10ms)
192.168.1.10    horcm0          1000            3000
HORCM_CMD
#dev_name       dev_name        dev_name
/dev/rdsk/c1t0d6s2
HORCM_DEV
#dev_group      dev_name        port#           TargetID        LU#
teama           pair1           CL1-C           0               4
teama           pair2           CL1-C           0               5
HORCM_INST
#dev_group      ip_address      service
teama           192.168.1.12    horcm1
****************************************
Step 6: CCI(RAID Manager) pair operation
****************************************
Below is example output from a TrueCopy operation.
1. Write data to the P-VOL, for example:
# newfs /dev/rdsk/c1t0d4s2
# mkdir /export/home/test
# mount /dev/dsk/c1t0d4s2 /export/home/test
# cp -r /opt /export/home/test
2. Create/Split/Resync Pair:
a. Display the status of all the LDEVs managed under teama,
for example:
# pairdisplay -g teama -fx
Group PairVol(L/R)(Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence,Seq#,P-LDEV#,M
teama Pair1(L)     (CL1-C,0,  4) 20405 503...SMPL ---- -----,---- ------- -
teama Pair1(R)     (CL1-D,0,  0) 20171 507...SMPL ---- -----,---- ------- -
teama Pair2(L)     (CL1-C,0,  5) 20405 504...SMPL ---- -----,---- ------- -
teama Pair2(R)     (CL1-D,0,  1) 20171 508...SMPL ---- -----,---- ------- -
Group:        Dev_group name under horcm[01].conf.
Pair1(L/R):   L means the local host, R means the remote host.
Port#,TID,LU: 9900 port, Target ID, and LUN number.
Seq#:         The serial number of the 9900.
LDEV#:        The CU and LDEV number, shown in hex due to the -fx option.
b. Create a pair:
# paircreate -g teama -d pair1 -vl -f never
Issue the pairdisplay command right after the paircreate command to see
the copy status.
# pairdisplay -g teama -fx
Group PairVol(L/R)(Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence,Seq#,P-LDEV#,M
teama Pair1(L)     (CL1-C,0,  4) 20405 503...COPY ---- -----,---- ------- -
teama Pair1(R)     (CL1-D,0,  0) 20171 507...COPY ---- -----,---- ------- -
teama Pair2(L)     (CL1-C,0,  5) 20405 504...COPY ---- -----,---- ------- -
teama Pair2(R)     (CL1-D,0,  1) 20171 508...COPY ---- -----,---- ------- -
The pair is in COPY status.
Issue the pairdisplay command after the copy operation is completed.
# pairdisplay -g teama -fx
Group PairVol(L/R)(Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence,Seq#,P-LDEV#,M
teama Pair1(L)     (CL1-C,0,  4) 20405 503...PAIR ---- -----,---- ------- -
teama Pair1(R)     (CL1-D,0,  0) 20171 507...PAIR ---- -----,---- ------- -
teama Pair2(L)     (CL1-C,0,  5) 20405 504...PAIR ---- -----,---- ------- -
teama Pair2(R)     (CL1-D,0,  1) 20171 508...PAIR ---- -----,---- ------- -
The pair is in PAIR status.
c. Split pair:
# pairsplit -g teama -d pair1
# pairdisplay -g teama -fx
Group PairVol(L/R)(Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence,Seq#,P-LDEV#,M
teama Pair1(L)     (CL1-C,0,  4) 20405 503...PSUS ---- -----,---- ------- -
teama Pair1(R)     (CL1-D,0,  0) 20171 507...PSUS ---- -----,---- ------- -
teama Pair2(L)     (CL1-C,0,  5) 20405 504...PAIR ---- -----,---- ------- -
teama Pair2(R)     (CL1-D,0,  1) 20171 508...PAIR ---- -----,---- ------- -
The pair is in PSUS status.
d. Resync pair:
# pairresync -g teama -d pair1
# pairdisplay -g teama -fx
Group PairVol(L/R)(Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence,Seq#,P-LDEV#,M
teama Pair1(L)     (CL1-C,0,  4) 20405 503...PAIR ---- -----,---- ------- -
teama Pair1(R)     (CL1-D,0,  0) 20171 507...PAIR ---- -----,---- ------- -
teama Pair2(L)     (CL1-C,0,  5) 20405 504...PAIR ---- -----,---- ------- -
teama Pair2(R)     (CL1-D,0,  1) 20171 508...PAIR ---- -----,---- ------- -
e. Split pair and return to SMPL state:
# pairsplit -g teama -d pair1 -S
# pairdisplay -g teama  -fx
Group PairVol(L/R)(Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence,Seq#,P-LDEV#,M
teama Pair1(L)     (CL1-C,0,  4) 20405 503...SMPL ---- -----,---- ------- -
teama Pair1(R)     (CL1-D,0,  0) 20171 507...SMPL ---- -----,---- ------- -
teama Pair2(L)     (CL1-C,0,  5) 20405 504...PAIR ---- -----,---- ------- -
teama Pair2(R)     (CL1-D,0,  1) 20171 508...PAIR ---- -----,---- ------- -
The pair is in SMPL status.
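In an unattended backup script, the status field from pairdisplay can be checked before proceeding to the next step. A minimal sketch that parses the sample pairdisplay line above; the polling loop is shown commented out since it needs a live CCI instance (CCI also provides a pairevtwait command for waiting on status transitions):

```shell
# Sample pairdisplay line from above; on a live host:
#   pd_out=$(pairdisplay -g teama -d pair1 -fx)
pd_out='teama Pair1(L)     (CL1-C,0,  4) 20405 503...PAIR ---- -----,---- ------- -'

# The status token follows the run of dots after the LDEV#; strip
# everything up to and including the dots, then take the first field.
status=$(printf '%s\n' "$pd_out" | awk '{ sub(/.*\.\.\./, ""); print $1 }')
echo "pair1 status: $status"    # prints: pair1 status: PAIR

# A polling loop might look like this (not run here; requires CCI):
# while [ "$(pairdisplay -g teama -d pair1 -fx |
#            awk 'NR==2 { sub(/.*\.\.\./,""); print $1 }')" != "PAIR" ]
# do sleep 30; done
```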


Product
Sun StorageTek 9910 System
Sun StorageTek 9980 System
Sun StorageTek 9970 System
Sun StorageTek 9960 System

SE9900, CCI, ShadowImage, TrueCopy, Raid Manager, HORCM, Cheatsheet
Previously Published As
76960

Change History
Date: 2009-12-02
User Name: 47940
Action: Approved
Comment: Verified Metadata - ok
Verified Keywords - ok
Verified still correct for audience - currently set to contract
Audience left at contract as per FvF at
http://kmo.central/howto/content/voyager-contributor-standards.html
Checked review date - currently set to 2008-04-09
Checked for TM - ok as presented
Publishing under the current publication rules of 18 Apr 2005:
Date: 2007-04-12
User Name: 31620
Action: Approved
Comment: Verified Metadata - ok
Verified Keywords - ok
Verified still correct for audience - currently set to contract
Audience left at contract as per FvF at
http://kmo.central/howto/content/voyager-contributor-standards.html
Checked review date - currently set to 2008-04-09
Checked for TM - ok as presented
Publishing under the current publication rules of 18 Apr 2005:

Attachments
This solution has no attachment
  Copyright © 2011 Sun Microsystems, Inc.  All rights reserved.