tw_cli: Command Line Interface Storage Management Software for AMCC/3ware ATA RAID Controller(s).
It is a RAID monitoring utility that helps you maintain a 3ware RAID array.
You can check the health of your 3ware RAID array under Linux or Windows.
On Windows, you access the command line from the tw_cli icon; the commands are the same.
We can run it either as an interactive program with its own command prompt:
# ./tw_cli
>
/> show
Ctl   Model        (V)Ports  Drives  Units  NotOpt  RRate  VRate  BBU
----------------------------------------------------------------------
c2    9550SX-4LP   4         4       2      0       1      1      -
This means controller c2 has 4 drives on 4 ports, and all are working fine, since NotOpt (units not optimal) is 0.
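If you want to script this check, the NotOpt count can be pulled out of the show output with standard shell tools. A minimal sketch, assuming the column layout shown above (NotOpt is the sixth field of each controller line); adjust the field number if your model prints a different set of columns:

# ./tw_cli show | awk '/^c[0-9]/ && $6 != 0 {print $1 " is not optimal (NotOpt=" $6 ")"}'

This prints nothing while every controller reports NotOpt=0, which makes it easy to call from a cron job.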
To get the status of controller c2:
//> info c2
Unit  UnitType  Status  %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------
u0    RAID-5    OK      -       -       16K     596.025   ON     OFF
u1    SINGLE    OK      -       -       -       372.519   ON     OFF

Port  Status  Unit  Size       Blocks     Serial
---------------------------------------------------------
p0    OK      u0    298.09 GB  625142448  5QF0EKAT
p1    OK      u0    298.09 GB  625142448  5QF0EKB6
p2    OK      u0    298.09 GB  625142448  5QF0EKPP
p3    OK      u1    372.61 GB  781422768  WD-WMAMY1596298
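While we are in the interactive shell, it is also worth knowing about focus, which sets a default object so that you do not have to keep repeating its name (it is used again in the rebuild section near the end of this article). Roughly, and with the exact prompt and output depending on your CLI version:

//pi> focus /c2
//pi/c2> show

At controller focus, show reports essentially the same unit and port summary as info c2 above.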
Alternatively, we can run it as a shell utility, passing the command on the command line:
# ./tw_cli show
Ctl   Model        (V)Ports  Drives  Units  NotOpt  RRate  VRate  BBU
----------------------------------------------------------------------
c2    9550SX-4LP   4         4       2      0       1      1      -

# ./tw_cli info c2
Unit  UnitType  Status  %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------
u0    RAID-5    OK      -       -       16K     596.025   ON     OFF
u1    SINGLE    OK      -       -       -       372.519   ON     OFF

Port  Status  Unit  Size       Blocks     Serial
---------------------------------------------------------
p0    OK      u0    298.09 GB  625142448  5QF0EKAT
p1    OK      u0    298.09 GB  625142448  5QF0EKB6
p2    OK      u0    298.09 GB  625142448  5QF0EKPP
p3    OK      u1    372.61 GB  781422768  WD-WMAMY1596298
root@pi [/usr/local/ysa/bin]#
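Because the shell-utility form needs no interaction, it is easy to wrap in a small cron job that alerts you when anything goes wrong. A minimal sketch, assuming tw_cli is installed in /usr/local/ysa/bin as in the prompt above, the controller is c2, and a working mail command is available; adjust the path, controller name and recipient to suit your system:

#!/bin/sh
# Hypothetical RAID check: mail root if any unit or port is not reported as OK.
TW=/usr/local/ysa/bin/tw_cli
STATUS=$("$TW" info c2)
# Unit lines (u0, u1, ...) and port lines (p0, p1, ...) should all contain " OK ".
if echo "$STATUS" | grep -E '^[up][0-9]' | grep -vq ' OK '; then
    echo "$STATUS" | mail -s "3ware RAID on $(hostname) needs attention" root
fi

Dropped into /etc/cron.hourly (or called from a crontab entry), this stays silent while everything is OK and mails the full status output as soon as a unit goes DEGRADED or a port stops responding. Note that it may also fire while a unit is rebuilding or verifying.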
To get more detailed output for a particular unit, we add the unit name to the command:
# ./tw_cli info c2 u0
Unit  UnitType  Status  %RCmpl  %V/I/M  Port  Stripe  Size(GB)
----------------------------------------------------------------
u0    RAID-5    OK      -       -       -     16K     596.025
u0-0  DISK      OK      -       -       p2    -       298.013
u0-1  DISK      OK      -       -       p1    -       298.013
u0-2  DISK      OK      -       -       p0    -       298.013
Here u0-0 means subunit 0 of unit u0; the Port column shows which physical port its disk sits on (p2 in this case).
The above outputs show there are 4 drives on our controller, arranged in two units: u0 (a three-disk RAID-5) and u1 (a single disk).
Now check the output below, from another controller:
./tw_cli info
Ctl   Model      (V)Ports  Drives  Units  NotOpt  RRate  VRate  BBU
--------------------------------------------------------------------
c0    8006-2LP   2         2       1      1       3      -      -
This means controller c0 has two drives on two ports, one of which has a problem (NotOpt=1).
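Before doing anything drastic, it is worth looking at the controller's event log, which usually records why a port dropped out (read timeouts, ECC errors and so on). On most CLI versions a command along these lines lists the logged alarms, though the exact subcommand can vary between releases:

./tw_cli /c0 show alarms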
To see which unit and port are affected:
./tw_cli info c0
Unit  UnitType  Status    %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
--------------------------------------------------------------------------
u0    RAID-1    DEGRADED  -       -       -       139.735   ON     -

Port  Status    Unit  Size       Blocks     Serial
-----------------------------------------------------------
p0    DEGRADED  u0    139.73 GB  293046768  WD-WMAP41084290
p1    OK        u0    139.73 GB  293046768  WD-WXC0CA9D2877
The status of unit u0 is DEGRADED.
To get more details:
./tw_cli info c0 u0
which shows the individual disks in the unit:
Unit  UnitType  Status    %RCmpl  %V/I/M  Port  Stripe  Size(GB)
------------------------------------------------------------------
u0    RAID-1    DEGRADED  -       -       -     -       139.735
u0-0  DISK      DEGRADED  -       -       p0    -       139.735
u0-1  DISK      OK        -       -       p1    -       139.735
From this it is clear that the disk on port p0 is itself degraded.
That probably means it has errors, but it may just have stopped working properly for some other reason, so it may be worth trying to rebuild the array, as follows. Sometimes a simple rescan will bring the drive back into the array.
./tw_cli maint remove c0 p0
This removes the degraded disk from the array, producing the following output:
Removing port /c0/p0 … Done.
If we now run:
./tw_cli info c0 u0
we get a slightly different result:
Unit  UnitType  Status    %RCmpl  %V/I/M  Port  Stripe  Size(GB)
------------------------------------------------------------------
u0    RAID-1    DEGRADED  -       -       -     -       139.735
u0-0  DISK      DEGRADED  -       -       -     -       139.735
u0-1  DISK      OK        -       -       p1    -       139.735
The only difference here is that disk u0-0 is no longer assigned to port p0. Now you have to find the disk again:
./tw_cli maint rescan c0
Gives the bleak output:
Rescanning controller /c0 for units and drives …Done.
Found the following unit(s): [none].
Found the following drive(s): [none].
This suggests the controller hasn't just lost track of the disk; the disk really has failed. It may simply be unseated, of course, so get someone to remove it and plug it back in again, if possible. Trying the following:
./tw_cli maint remove c0 p0
Gives the output:
Removing port /c0/p0 … Failed.
(0x0B:0x002E): Port empty
Yes. It’s really not there, and it really can’t find it. So either it has become unseated or it is dead.
Another meaning of NOT-PRESENT might be that there is a disk in the slot but it hasn't been added to any unit, or that it was dropped from a unit after a failure but is otherwise still usable. In that case do this:
./tw_cli /c0/p0 export
This comes back with:
Removing /c0/p0 will take the disk offline.
Do you want to continue ? Y|N [N]:
Respond Y and if the disk is okay, you’ll get:
Exporting port /c0/p0 … Done.
Then you can add it to the array again with a maint rescan followed by a maint rebuild.
In our case it responded with:
Removing port /c0/p0 … Failed.
(0x0B:0x002E): Port empty
Which confirms the deadness of the disk.
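To summarise the replacement procedure for a disk that really is dead: pull the failed drive, insert the replacement, and repeat the maint commands used above. A rough sketch, using this article's controller, unit and port numbers, which you should adjust to your own layout:

./tw_cli maint remove c0 p0      # remove the failed disk from the controller's view (skip if the port is already empty)
./tw_cli maint rescan c0         # after physically swapping or reseating the drive
./tw_cli maint rebuild c0 u0 p0  # rebuild the degraded unit onto the drive on p0
./tw_cli info c0 u0              # check that the rebuild has started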
To check the CLI version:
//pi> show ver
CLI Version = 2.01.09.004
API Version = 2.06.01.006
//pi>
Rescan and rebuild a DEGRADED RAID array.
tw_cli maint rescan c0
Rescanning controller /c0 for units and drives …Done.
Found the following unit(s): [none].
Found the following drive(s): [/c0/p0].
Then start the interactive shell:
tw_cli
//localhost> focus /c0/u1
//localhost/c0/u1> show all
/c0/u1 status = DEGRADED
/c0/u1 is not rebuilding, its current state is DEGRADED
/c0/u1 is not verifying, its current state is DEGRADED
/c0/u1 is not initializing. Its current state is DEGRADED
Unit  UnitType  Status    %Cmpl  Port  Stripe  Size(GB)  Blocks
------------------------------------------------------------------
u1    RAID-1    DEGRADED  -      -     -       111.79    234439600
u1-0  DISK      DEGRADED  -      -     -       111.79    234439600
u1-1  DISK      OK        -      p1    -       111.79    234439600
//localhost/c0/u1> maint rebuild c0 u1 p0
Sending rebuild start request to /c0/u1 on 1 disk(s) [0] … Done.
//localhost/c0/u1> show
Unit  UnitType  Status  %Cmpl  Port  Stripe  Size(GB)  Blocks
----------------------------------------------------------------
u1    RAID-1    OK      -      -     -       111.79    234439600
u1-0  DISK      OK      -      p0    -       111.79    234439600
u1-1  DISK      OK      -      p1    -       111.79    234439600
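The rebuild itself runs in the background and can take several hours on large drives; its progress shows up in the %Cmpl (or %RCmpl) column of the show and info output. A simple way to keep an eye on it, assuming the standard watch utility is installed:

watch -n 60 'tw_cli info c0 u1'

This reruns the status query every minute until you stop it with Ctrl-C.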