Sunday, April 27, 2014

Storage Reclaim ( Celerra & VMAX ) scripts

Celerra:

The delete_share file contains all the share names that need to be removed.  Verify everything carefully before starting the reclaim.


1) Verify whether there are any active client connections to the Celerra
     server_netstat server_2
     Look for CIFS and NFS connections
     Make sure the associated connections are cleared before the reclaim starts

2) Delete exports
    for i in `cat delete_share`; do echo $i; server_export server_2 -unexport -perm $i; done

3) Remove replications if any exists
    nas_replicate -list
    nas_replicate -delete replication_name -mode both/source/destination

4) Unmount file systems
     for i in `cat delete_share`; do echo $i; server_umount server_2 -perm /$i; done

5) Delete mount points
    for i in `cat delete_mountpoint`; do echo $i; server_mountpoint server_2 -delete /$i; done

6) Delete File systems
    for i in `cat delete_fs`; do echo $i; nas_fs -delete $i; done

7) Delete LUNs from the VG8
    for i in `cat delete_luns`; do echo $i; nas_disk -delete $i -perm; done
    ** make sure the device's "in use" status is False before deleting **
    nas_disk -list
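The per-step loops above can be rolled into a single sketch. This is an illustration, not a verified script: the file names (delete_share, delete_mountpoint, delete_fs, delete_luns) and the server_2 Data Mover come from the steps above, and with DRY_RUN=1 (the default) it only echoes what it would run so the commands can be reviewed first.

```shell
#!/bin/sh
# Combined Celerra reclaim sketch (steps 2, 4, 5, 6, 7 above).
# DRY_RUN=1 (default) echoes each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

reclaim_celerra() {
    # Step 2: delete exports
    for i in `cat delete_share`; do run server_export server_2 -unexport -perm "$i"; done
    # Step 4: unmount file systems
    for i in `cat delete_share`; do run server_umount server_2 -perm "/$i"; done
    # Step 5: delete mount points
    for i in `cat delete_mountpoint`; do run server_mountpoint server_2 -delete "/$i"; done
    # Step 6: delete file systems
    for i in `cat delete_fs`; do run nas_fs -delete "$i"; done
    # Step 7: delete disks -- confirm "in use" is False via nas_disk -list first
    for i in `cat delete_luns`; do run nas_disk -delete "$i" -perm; done
}
```

Run it once with DRY_RUN=1, review the echoed commands against the verification steps, then rerun with DRY_RUN=0.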



Removing a VDM-based configuration:

1) Delete all file systems used by the VDM
2) Remove the VDM
3) Remove file systems not associated with a VDM
4) nas_disk -d to remove all d# disks other than the control LUNs
5) Remove the LUNs from the storage group on the Symm channel


VMAX

Reclaiming Storage - VMAX

1. Get the list of thin devices bound in the NAS pool:
symcfg -sid xx show -pool poolname  -thin -detail -gb |grep tdev

2. Get the list of FA ports that these thin devices are mapped to
for i in `cat devs`; do symdev -sid XX show $i |grep FA; done
Validate the mapping info gathered from the above step
symcfg list -dir XX -p X -address -avail

3. Write disable the devices
for i in `cat devs`; do symdev write_disable $i -celerra -sid XX -nop; done 
Check for no errors on command exit.

4. Unmap the thin devices from all the directors
for i in `cat devs`; do symconfigure -sid XX -cmd "unmap dev $i from dir all:all emulation=CELERRA_FBA;" commit -nop; done 
Check for no errors on command exit.

5. Validate that the unmapping is successful.
for i in `cat devs`; do symdev show $i -sid XX |grep FA; done
Check that N/A is listed for FA in the output


6. Unbind the thin devices from the thin pool
for i in `cat devs`; do symconfigure -sid 47 -cmd "unbind tdev $i from pool NAS-AR-R62;" commit -nop; done
Check for no errors on command exit.


7. Validate that the unbinding is successful.
for i in `cat devs`; do symdev show $i -sid XX |grep Pool; done
Check that N/A is listed for Pool in the output
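The two N/A checks (steps 5 and 7) can be wrapped in one small helper. A sketch only: the first argument is whatever command prints the `symdev show` output for a device, passed in as a parameter so the grep logic can be exercised without an array; against a real VMAX you would wrap `symdev show $1 -sid XX` in a function and pass that in.

```shell
#!/bin/sh
# check_field_na SHOW_CMD DEV FIELD
#   SHOW_CMD: a command/function that prints "symdev show" output for DEV
#   FIELD:    FA (after unmapping) or Pool (after unbinding)
# Prints a status line; returns 1 if the field still has a value other than N/A.
check_field_na() {
    show_cmd=$1; dev=$2; field=$3
    if "$show_cmd" "$dev" | grep "$field" | grep -qv 'N/A'; then
        echo "$dev: $field still set"
        return 1
    fi
    echo "$dev: $field is N/A"
}

# Usage against a real array (SID as in the steps above):
#   show_dev() { symdev show "$1" -sid XX; }
#   for i in `cat devs`; do check_field_na show_dev "$i" FA; done
```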


8. Get the list of data devices in the NAS pool:
symcfg -sid xx show -pool poolname -thin -detail -gb |grep tdat



9. Disable the data devices in the NAS Pool

for i in `cat devs`; do symconfigure -sid XX -cmd "disable dev $i from pool poolname, type=thin;" commit -nop; done


10. Remove the data devices from the NAS Pool

for i in `cat devs`; do symconfigure -sid XX -cmd "remove dev $i from pool poolname, type=thin;" commit -nop; done


11. Add the data devices to the Prod SATA Pool
for i in `cat devs`; do symconfigure -sid XX -cmd "add dev $i to pool poolname, type=thin, member_state=ENABLE;" commit -nop; done


12. Validate that the devices have been successfully added to the Pool 
symcfg -sid xx show -pool PROD-AR-R62 -thin -detail -gb


13. Initiate the Pool Rebalance
symconfigure -sid XX -cmd "start balancing on pool XXXX;" preview/commit -nop 


14. Validate that the Rebalance is successful.
symcfg -sid xx show -pool PROD-AR-R62 -thin -detail -gb 
Check for "Balancing" for Pool state


15. Verify the growth in the Pool size
symcfg -sid xx show -pool PROD-AR-R62 -thin -detail -gb
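Steps 3, 4 and 6 above (write-disable, unmap, unbind) can be chained per device, stopping at the first failure. Again a hedged sketch: SID and SRC_POOL are placeholders, the devs file is the list built in step 1, and with DRY_RUN=1 (the default) it only echoes the commands.

```shell
#!/bin/sh
# Chained VMAX reclaim sketch: write_disable -> unmap -> unbind per device.
SID=${SID:-XX}
SRC_POOL=${SRC_POOL:-poolname}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

vmax_reclaim() {
    for i in `cat devs`; do
        run symdev write_disable "$i" -celerra -sid "$SID" -nop || return 1
        run symconfigure -sid "$SID" -cmd "unmap dev $i from dir all:all emulation=CELERRA_FBA;" commit -nop || return 1
        run symconfigure -sid "$SID" -cmd "unbind tdev $i from pool $SRC_POOL;" commit -nop || return 1
    done
}
```

The `|| return 1` after each step mirrors the "check for no errors on command exit" notes above: a failure on one device stops the run rather than continuing blindly.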









Wednesday, April 16, 2014

Robocopy for migrating Cifs shares VG8 to Isilon


Robocopy

Copying cifsshare share from VG8 to Isilon

robocopy \\source.test.abc.com\cifsshare \\destination.test.abc.com\cifsshare  /copyall /E /r:0 /w:0 /log+:backoutrobo.out /tee


Other Examples:

Robocopy C:\Scripts \\RemoteComputerName\Share /E /SEC /R:1 /W:1 /LOG:c:\Robocopylog.txt /TEE

Equivalent examples:
robocopy \\source.test.abc.com\cifsshare \\destination.test.abc.com\ifs\corpist01\cifsshare /copyall /r:1 /w:1 /log+:backoutrobo.out /tee
robocopy \\source.test.abc.com\cifsshare \\destination.test.abc.com\cifsshare /copyall /E /r:0 /w:0 /log+:backoutrobo.out /tee
Robocopy \\source.test.abc.com\cifsshare \\destination.test.abc.com\ifs\corpist01\cifsshare /E /SEC /R:1 /W:1 /LOG:c:\Robocopylog.txt /TEE
robocopy \\source.test.abc.com\cifsshare \\destination.test.abc.com\cifsshare /copyall /E /r:1 /w:0 /log+:backoutrobo.out /tee

Description:

I don't recommend using the /COPYALL parameter, as it also copies owner information to the remote computer, which can cause problems later if the current owner is Administrator on the source computer or a user who does not exist on the destination.
I also include the log option to save the logs to C:\Robocopylog.txt so that you can check for failures afterwards.
/R and /W are the retry options. I set the retry count to 1 and the wait time to 1 second so the copy won't get stuck retrying; the default retry count is 1 million.

Please make sure the account you use to run the Robocopy command has read and write access to the \\Remote_ComputerName\Share.


C:\Users\user>robocopy \\source.test.abc.com\cifsshare \\destination.test.abc.com\cifsshare /copyall /lev:1 /r:0 /w:0 /log+:backoutrobo.out /tee

 Log File : C:\Users\user\backoutrobo.out

-------------------------------------------------------------------------------
   ROBOCOPY     ::     Robust File Copy for Windows
-------------------------------------------------------------------------------

  Started : Mon Feb 03 11:31:49 2014

   Source : \\source.test.abc.com\cifsshare\
     Dest : \\destination.test.abc.com\cifsshare\

    Files : *.*

  Options : *.* /TEE /COPYALL /LEV:1 /R:0 /W:0

------------------------------------------------------------------------------

                           1    \\source.test.abc.com\cifsshare\
          *EXTRA File              51285        ramtest.PNG
          *EXTRA File              17920        Thumbs.db
          *EXTRA File             170885        vnxe.PNG
100%        New File               21508        .DS_Store
2014/02/03 11:31:50 ERROR 5 (0x00000005) Copying NTFS Security to Destination File \\source.test.abc.com\cifsshare\.DS_Store
Access is denied.


------------------------------------------------------------------------------

               Total    Copied   Skipped  Mismatch    FAILED    Extras
    Dirs :         1         0         1         0         0         0
   Files :         1         0         0         0         1         3
   Bytes :    21.0 k         0         0         0    21.0 k   234.4 k
   Times :   0:00:00   0:00:00                       0:00:00   0:00:00

   Ended : Mon Feb 03 11:31:50 2014

 
 
Output:

Migration completed. Log output:

------------------------------------------------------------------------------

               Total    Copied   Skipped  Mismatch    FAILED    Extras
    Dirs :      1540      1539         1         0         0         0
   Files :     11974         0         1         0     11973         3
   Bytes :  78.773 g         0    21.0 k         0  78.773 g   234.4 k
   Times :   0:37:47   0:35:47                       0:00:00   0:01:59

   Ended : Mon Feb 03 12:31:38 2014
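Since /log+: appends each run to the same file, it's handy to pull the failure count out of the last summary block programmatically. A small sketch, assuming the English robocopy summary layout shown above (Total, Copied, Skipped, Mismatch, FAILED, Extras columns):

```shell
#!/bin/sh
# failed_files LOGFILE
# Prints the FAILED count from the last "Files :" summary row in a robocopy log.
failed_files() {
    awk '/^ *Files :/ { failed = $7 } END { print failed + 0 }' "$1"
}

# Example: flag the migration step if anything was left behind
# [ "$(failed_files backoutrobo.out)" -eq 0 ] || echo "robocopy reported failures"
```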

Configure EMC Isilon NAS VAAI plugin for vSphere

www.jasemccarty.com/blog/?p=2561


Sunday, April 13, 2014


Isilon - InsightIQ

InsightIQ query requests to Isilon cluster

/var/log/apache2/webui_httpd_access.log
netstat -an |grep -i wait
netstat -an |grep -i established




Isilon - dtrace analysis

Download dtrace_v11.py
Upload it to /ifs/data
isi_for_array -s -S "nohup python /ifs/data/dtrace_v11.py >&! /dev/null &"


Isilon - LDAP groups

isi auth roles modify --role=systemadmin --add-group ldapname
isi auth roles members list --role=systemadmin

Sudoers File in Isilon - RBAC

 Modify Sudoers File:

Add custom sudoers settings for LDAP groups and users


Main sudoers file: /etc/mcp/templates/sudoers - cannot be modified directly

To create custom permission settings for LDAP groups, create a new file, /tmp/sudoers:

vi /tmp/sudoers

User_Alias    HMONITOR = %ldapgroupname
Cmnd_Alias    ISI_MONITOR = /usr/bin/isi batterystatus*, \
                            /usr/bin/isi stat*, \
                            /usr/bin/isi status*, \
                            /usr/bin/isi_hw_status, \
                            /sbin/gmirror status*
HMONITOR ALL=(ALL) NOPASSWD: ISI_MONITOR


cp /tmp/sudoers  /etc/mcp/override


Verify sudoers procedure:   cat /etc/mcp/scripts/sudoers.py

/usr/local/etc/sudoers.d


For releases after 7.1

Edit the sudoers file directly with the isi_visudo command
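Before copying a fragment into /etc/mcp/override, it's worth a syntax check. A hedged sketch (the visudo -c check is skipped when visudo isn't on the path, and DRY_RUN=1, the default, only echoes the copy):

```shell
#!/bin/sh
# Validate a custom sudoers fragment, then stage it into /etc/mcp/override.
DRY_RUN=${DRY_RUN:-1}

install_sudoers() {
    fragment=$1
    if command -v visudo >/dev/null 2>&1; then
        # -f points visudo at the fragment; owner/mode checks are skipped with -f
        visudo -c -f "$fragment" || { echo "syntax error in $fragment"; return 1; }
    fi
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "WOULD RUN: cp $fragment /etc/mcp/override"
    else
        cp "$fragment" /etc/mcp/override
    fi
}
```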






Thursday, April 10, 2014

InsightIQ code upgrade from 2.1 to 3.0

If the datastore is maintained locally on the InsightIQ VM:

A direct upgrade from 2.1 to 3.0 is not possible.  The 2.1 datastore first has to be migrated to 2.5, which takes around two weeks, and then from 2.5 to 3.0.

Take a backup from 2.1 and deploy a new 3.0 VM:


1) Stop the InsightIQ service on the VM
     iiq_stop

2) Take a backup
3) Create an Isilon datastore export and mount it on the InsightIQ VM
4) Copy the local datastore to the mount point
5) Shut down the VM
6) Remove the VM from the vCenter inventory
7) Deploy a new 3.0 VM from the virtual appliance OVA
8) Create the VM using the old network settings and power it on
9) Add the Isilon cluster from the GUI



1.  Open an SSH connection (CLI session) to the Isilon cluster on which you want to create the export.
2.  Execute the following command to make a directory for your datastore:
        # mkdir /ifs/insightiq
3.  /ifs/insightiq is used here as an example; feel free to choose your own path and directory name.
4.  Create an export by executing the following command on the Isilon cluster:
        # isi nfs exports create --paths=/ifs/insightiq --root-clients=<IP address of IIQ 2.5.x> --root-clients=<IP address of IIQ 3.0>
5.  Now open an SSH connection to the old IIQ (v2.5.x) using the administrator account.
6.  Execute the command below to become root on IIQ:
        # sudo su -
7.  Stop the InsightIQ service:
        # sudo stop insightiq
8.  Create a folder on InsightIQ to act as a mount point for the export:
        # cd /
        # mkdir data_mount
9.  Mount the export from the cluster on IIQ:
        # mount -t nfs <IP address / SmartConnect name of cluster>:/ifs/insightiq /data_mount
10. Verify the mount by executing the following command:
        # mount -v
11. Navigate to the local datastore on InsightIQ and copy all of its contents to the mount point:
        # cd /datastore
        # tar cf - . | tar -C /data_mount -x
12. After the copy completes, repeat steps 5 - 10 on the new IIQ (v3.0) VM.
13. Once you have mounted the export, navigate to the mount point and copy all the data to the local datastore:
        # cd /data_mount
        # tar cf - . | tar -C /datastore -x
14. After the copy completes, upgrade the datastore:
        # upgrade_iiq_datastore
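The tar pipelines in steps 11 and 13 can be wrapped with a file-count comparison as a cheap sanity check after the copy. A sketch (the count check is my own addition, not part of the official procedure, and is only meaningful when the destination starts out empty):

```shell
#!/bin/sh
# copy_datastore SRC DST: tar-pipe the contents of SRC into DST, then
# compare file counts (valid only when DST starts empty).
copy_datastore() {
    src=$1; dst=$2
    ( cd "$src" && tar cf - . ) | tar -C "$dst" -xf -
    s=$(find "$src" -type f | wc -l)
    d=$(find "$dst" -type f | wc -l)
    if [ "$s" -eq "$d" ]; then
        echo "copy ok: $s files"
    else
        echo "file count mismatch: src=$s dst=$d"
        return 1
    fi
}

# Step 11 equivalent: copy_datastore /datastore /data_mount
# Step 13 equivalent: copy_datastore /data_mount /datastore
```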


Commands to start and stop InsightIQ services
sudo stop insightiq
sudo start insightiq



VMAX - Pin devices

symdev -sid abc pin dev_number

VMAX Provisioning

VMAX Provisioning Commands

1)   Create Thin devices
symconfigure -sid abc -cmd "create dev count=40, size=10G, emulation=FBA, config=tdev;" preview
symconfigure -sid abc -cmd "create dev count=40, size=10G, emulation=FBA, config=tdev;" commit

2)   Verify available thin pools
symcfg list -thin -pool

3)   Bind devices to thin pool
symconfigure -sid abc -cmd "bind tdev 100 to pool pool_name;" preview
symconfigure -sid abc -cmd "bind tdev 100 to pool pool_name;" commit

4)  Make sure Zones are in place

5) Verify that you can see the wwn in VMAX
symmask -sid 123 list logins -dir 5e -p 0

6)  Create Aliases for the wwns
symmask -sid 123 -wwn 00000000000000  rename abcd123456789

7)  Create Initiator group and assign wwns
symaccess -sid 123 create -name ig_initiatorgroup -type init -wwn abcd12345678
symaccess -sid 123 add -name ig_initiatorgroup -type init -wwn abcd87654321

8) Create Port group and assign FA ports
symaccess -sid 123 create -name pg_portgroup -type port -dirport 6e:0
symaccess -sid 123 add -name pg_portgroup -type port -dirport 7e:0

9) Create Storage group and assign thin devices
symaccess -sid 123 create -name sg_storagegroup -type stor -dev 00A1
symaccess -sid 123 add -name sg_storagegroup -type stor -dev 00A2

10) Check an existing storage group to get the device list
symsg list -sid 123
symsg show sg_storagegroup

11) Create Masking view
symaccess -sid 123 create view -name mv_maskingview -sg sg_storagegroup -pg pg_portgroup -ig ig_initiatorgroup
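Steps 7-11 follow a fixed naming pattern (ig_/pg_/sg_/mv_ plus a base name), which lends itself to a small wrapper. A sketch only: the WWN, dirport and device below are the placeholder values from the steps above, and DRY_RUN=1 (the default) echoes the commands instead of running them.

```shell
#!/bin/sh
# Masking-view provisioning sketch for steps 7-11 above.
SID=${SID:-123}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

create_masking_view() {
    base=$1   # e.g. "host1" -> ig_host1 / pg_host1 / sg_host1 / mv_host1
    run symaccess -sid "$SID" create -name "ig_$base" -type init -wwn abcd12345678
    run symaccess -sid "$SID" create -name "pg_$base" -type port -dirport 6e:0
    run symaccess -sid "$SID" create -name "sg_$base" -type stor -dev 00A1
    run symaccess -sid "$SID" create view -name "mv_$base" \
        -sg "sg_$base" -pg "pg_$base" -ig "ig_$base"
}
```

Extra WWNs, ports and devices would still be added with the symaccess add commands shown in steps 7-9.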




Isilon Commands

Isilon cluster status
isi status

List Serial Numbers of Isilon Nodes
isi_for_array isi_hw_status |grep SerNo

Collect Support Materials
isi_gather_info

Verify Active connections between Isilon interfaces to Client Mountpoints
isi statistics client
netstat -an

Verify Boot drive status
gmirror status
atacontrol list

Verify faulted devices
isi_for_array "isi devices"
isi_for_array -s isi devices | grep -vi healthy
isi_for_array -s isi_hw_status | grep -i serno


To Check the Status of the cluster as a whole
isi status

#To Check the Status of an individual node
isi status -n <node #>

#To check on the status of disk or node pools
isi status -d

#To View currently running jobs (internal processes)
isi job status

#To view current cluster events (these need to be manually cleared)
isi events

#To quiet all events on the cluster
isi events quiet all

#To cancel all events on the cluster (note this does not delete the events)
isi events cancel all


#Some SyncIQ Commands
#Show the status of all currently running jobs (a summary)
isi sync jobs report

#Check on the detailed status of all running jobs
isi sync jobs report -v

#Manually start a SIQ policy
isi sync policy start <policyname>


#Restart a SIQ policy that had some problems, (still uses snaps, for the incremental)
isi sync policy resolve <policyname>

#View all Cluster Services
isi services -la

#Stop a cluster service (even one that you shouldn’t) {SyncIQ in this example}
isi services -a isi_migrate disable

#Start the same cluster service
isi services -a isi_migrate enable


#Show live per-client statistics (top-style)
isi statistics client --nodes all --top

#Show a table of useful live stats
isi statistics pstat

#Show the paths that are most active (a heat map more or less)
isi statistics heat


#A useful script to show all NFS connections on the cluster, lagged by 5 minutes
while true; do isi statistics client --nodes all --protocol nfs3 --totalby=Node --orderby=Node; sleep 10; done


#Basics of Screen
#To launch a new screen session
screen
#Hit Enter to accept the EULA

#To disconnect from your active screen session
Ctrl-A then Ctrl-D

#To list all screen sessions (per node)
screen -ls

#To reconnect to a screen session when only 1 is running
screen -r

#To reconnect to a screen session when more than 1 are running
screen -r <sessionid number from the ls command above>


#Run any command on all nodes
isi_for_array <syntax>

#Make this run sequentially, rather than in parallel, which makes the output easier to understand but takes longer
isi_for_array -s <syntax>

#Run the commands on only a subset of nodes
isi_for_array -n 5,6,7,8 <syntax>


#Maintenance tasks
#Smartfail a disk

isi devices -a smartfail -d 5:8

#This is the format of <Nodenumber:DiskNumber>
#Smartfail a node

isi devices -a smartfail -d 5


#This is the entire node; after all data is safely evacuated from the node, it will be rebooted, reformatted, and left in an unconfigured state

#Run hardware diagnostics on a new cluster
#DO NOT RUN THIS ON A PRODUCTION CLUSTER; it will cause very heavy resource utilization for a few hours
isi_ovt_check

#Gather Logs for Support
#The default location is in: /ifs/data/Isilon_Support/pkg/
isi_gather_info

#Gather logs for support but don't attempt to automatically send them back to support (via FTP, HTTP or email)
#After this command completes, a path will be output showing where the file is stored; please upload it to support manually
isi_gather_info --noupload

#Authentication Commands
#List Active Directory Providers configured
isi auth ads list

#View details of one of the resulting domains
isi auth ads view domainname.com -v

#List File Providers
isi auth file list

#Get more information
isi auth file list -v

#View LDAP providers configured
isi auth ldap list

#View NIS providers configured
isi auth nis list



#View how a specific AD user is being mapped, and the groups that are being enumerated
isi auth mapping token domainname\\username


#Also utilizing | head -20 can be helpful, because the most important information is at the top of the output.

#Access Zone Commands
isi zone zones ls
#Get more detail on all zones
isi zone zones ls -v