Asset ID: 1011232.1
Update Date: 2010-12-22
Keywords:
Solution Type: Problem Resolution Sure Solution

1011232.1: Recovering From a Failed SAF-TE Firmware Update on Sun Storage 33x0 RAID Arrays
Related Items
- Sun Storage 3310 Array
- Sun Storage 3320 SCSI Array
Related Categories
- GCS>Sun Microsystems>Storage - Disk>Modular Disk - 3xxx Arrays
Previously Published As: 215427
Applies to:
Sun Storage 3310 Array
Sun Storage 3320 SCSI Array
All Platforms
Symptoms
If a SAF-TE firmware update is interrupted, or encounters some other error, the process
can fail on the RAID array, leaving the EMU modules at a down-rev firmware level or in a
failed state. The update process may return messages such as:
SAF-TE Firmware download: one or more modules failed (CH 0 ID 14)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
sccli: download enclosure firmware: error: firmware download failure on some targets
Repeated attempts to update the firmware, or to cross-load firmware from a good EMU, will
not work, and may cause the good EMU to go into a failed state as well.
Changes
Cause
Solution
Resolution
In this circumstance, a possible workaround that can be performed on-site before replacing
hardware is to power off the array and bring it up connected as a JBOD. All that is
necessary is to remove both RAID controllers and run a cable directly to the I/O module on
the tray. If the affected unit is an expansion unit, disconnect the SCSI connection to the
RAID head and temporarily connect a host cable to one of the expansion ports.
Once the array is powered on, it will present all drives to the host as regular SCSI
disks, not as logical drives (LDs). At this point, run devfsadm (or wait for devfsadmd to
pick up the new devices) and verify that the drives appear in the format utility. The
previous entries for the logical drives will not be valid, but this is a temporary
condition resulting from the RAID controllers being bypassed.
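As a minimal sketch on a Solaris host, the rescan and verification might look like the
following (device names will vary by system):

# devfsadm
# echo | format

The devfsadm command creates device nodes for the newly visible disks, and the format
listing should then show the array's drives as individual SCSI targets.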
Once the devices are available in format, sccli can be used in-band to run the SAF-TE
update again. If multiple arrays are attached to the host, an explicit device path (to any
of the drives in the affected enclosure) can be used to direct sccli to the enclosure in
question.
The SAF-TE update can now be run against the problem component (see the sketch below),
and in many instances it will succeed where the update in the RAID configuration failed.
The exact reason for this is not known, but these steps can be attempted prior to a
hardware replacement when on-site support is not immediately available or parts are
delayed, and they can save time and resources.
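A minimal sketch of the in-band update follows. The device path and firmware file name
are examples only, and the exact sccli subcommand syntax can vary between CLI releases,
so confirm it against the sccli man page or the firmware patch README for your version:

# sccli /dev/rdsk/c2t0d0s2
sccli> show safte-device
sccli> download safte-firmware /tmp/saftefw.bin
sccli> show safte-device
sccli> quit

The first "show safte-device" records the current SAF-TE firmware revision, and the second
run verifies the revision after the download completes.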
After the procedure has been completed and the EMUs are online, the array can be powered
down and returned to its original configuration. As long as the drives were not directly
manipulated, all existing data should be unaffected. After the array is powered back on,
the administrator can run 'devfsadm -C' to clean up the device paths and remove the stale
JBOD entries. This can also be done manually if desired.
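For example, once the array is back in its original RAID configuration, the host-side
cleanup might look like this (again assuming a Solaris host):

# devfsadm -C
# echo | format

The -C option removes dangling device links left over from the temporary JBOD entries,
and the format listing should once again show the logical drive(s) presented by the RAID
controllers.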
Additional Information
Caution must be used in this temporary JBOD configuration. Outside of the firmware update
executed in-band with sccli, you MUST NOT manipulate the drives in the array in any way
(format, newfs, mount, etc.), or data loss may occur.
Attempts to install one good EMU alongside one bad EMU to try a cross-load have not been
successful; they result in an additional unusable EMU, as the bad EMU appears to bring
down the good one in some cases.
Change History
Date: 2010-12-22
User Name: susan.copeland@oracle.com
Action: Update & Currency review
Attachments
This solution has no attachment