Monday, October 27, 2014

Scripts for running commands across multiple shares/filesystems - Linux

for i in `cat delete_share`; do echo $i; server_export server_2 -unexport -perm $i; done
for i in `cat delete_share`; do echo $i; server_umount server_2 -perm /$i; done
for i in `cat delete_mountpoint`; do echo $i; server_mountpoint server_2 -delete /$i; done
for i in `cat delete_fs`; do echo $i; nas_fs -delete $i; done
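
If the whole cleanup needs to run as a single pass, the loops can be strung together as below. This is only a sketch that reuses the exact commands above and assumes the input files delete_share, delete_mountpoint and delete_fs each hold one name per line; review the echoed names before trusting the run.

# Hedged sketch: combined Celerra cleanup using the same input files as above
for i in `cat delete_share`; do
    echo "unexport + unmount: $i"
    server_export server_2 -unexport -perm $i
    server_umount server_2 -perm /$i
done
for i in `cat delete_mountpoint`; do
    echo "delete mountpoint: /$i"
    server_mountpoint server_2 -delete /$i
done
for i in `cat delete_fs`; do
    echo "delete filesystem: $i"
    nas_fs -delete $i
done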



Isilon - Sync list performance issue

The performance issue may be related to the number of SyncIQ reports the cluster is retaining. Keep the total number of reports below roughly 8,000 (8K) for expected WebUI and CLI performance when managing sync policies.

Total number of reports generated by SyncIQ:
# find /ifs/.ifsvar/modules/tsm/sched/reports -name "report-[0-9]*.gc" |wc -l

Determine how many reports are older than a certain period so they can be cleaned up:
 # find /ifs/.ifsvar/modules/tsm/sched/reports -name "report-[0-9]*.gc" -Btime +1w|wc -l
Adjust -Btime to +1d for older than a day, +2w for older than two weeks, and so on.

command to remove SyncIQ reports that are older than a week:
 find /ifs/.ifsvar/modules/tsm/sched/reports -name "report-[0-9]*.gc" -Btime +1w -exec rm -f {} \;


$ grep "scheduler.schedule" siq-policies.gc|cut -d "|" -f 2|grep -v '""'|sed 's/"//g'|sort |uniq -c
      2 Every 1 days every 15 minutes from 00:00 to 23:59
      1 Every 1 days every 1 minutes from 00:00 to 23:59
      1 Every 1 weeks on Sunday at 1:00 AM
      4 when-source-modified

To modify the maximum number of reports retained for a policy, use the following command:

To change the max reports: isi sync policy modify <policyname> --max_reports=200
Example: isi sync policy modify test_sync_prod --max_reports=200

After the maximum is set and old reports are cleaned up, the response from the sync policy list is much faster.
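
To apply the same cap to every policy in one go, a loop over a file of policy names can reuse the modify command above. This is a sketch only; policy_list is an assumed file with one policy name per line, and the flag is kept exactly as written above.

# Hedged sketch: apply the report cap to each policy named in policy_list
for p in `cat policy_list`; do
    echo $p
    isi sync policy modify $p --max_reports=200
done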







Tuesday, October 14, 2014

Isilon - Patch Install

Always verify the readme file that comes with the patch, because the patch procedure differs for every patch type.


1. Open an SSH connection on any node in the cluster and log in using the
    "root" account.

2. Copy the patch-xxxxx file to the /ifs/data directory on the cluster.

3. Run the following command to change to the /ifs/data directory:
 
   cd /ifs/data

4. To extract the patch file, run the following command:

   tar -zxvf patch-xxxxx.tgz

5. To install this patch, run the following command:

   isi pkg install patch-xxxx.tar

6. To verify that this patch is installed, run the following command:
 
   isi pkg info

7. Verify that patch-135046 appears in the list of installed packages.
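
For reference, steps 3 through 7 can be run back to back from one SSH session. This is only a sketch using the generic patch-xxxxx placeholder from the readme; substitute the real patch name and always follow the readme for that specific patch.

# Hedged sketch of steps 3-7 (replace patch-xxxxx with the real patch name)
cd /ifs/data
tar -zxvf patch-xxxxx.tgz
isi pkg install patch-xxxxx.tar
isi pkg info | grep patch-xxxxx     # confirm the patch shows in the installed list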

********************************************************************************

PERFORMING A ROLLING REBOOT

After the patch is installed, manually reboot each node in succession.

1. Open an SSH connection on any node in the cluster and log in using the "root"
   account.

2. Shut down the first node in the cluster by running the following command:

   shutdown -r now
 
3. To monitor the progress of the reboot, run the following command:

   isi status
 
4. Wait for the node to successfully reboot.

5. Repeat steps 2 - 4 for each remaining node.
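
To keep an eye on the reboot from a second session, a simple polling loop around isi status works; the 30-second interval below is just an assumption, stop it with Ctrl-C once the node is back.

# Hedged sketch: poll cluster status while a node reboots (Ctrl-C to stop)
while true; do
    date
    isi status
    sleep 30
done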

********************************************************************************
REMOVING THIS PATCH

If you need to remove this patch, complete the steps below.

IMPORTANT!
Read INSTALLATION/REMOVAL IMPACTS before performing this procedure.

1. To delete this patch, run the following command:

   isi pkg delete patch-xxxx

2. To verify that this patch was removed, run the following command:

   isi pkg info

3. Verify that patch-xxxx does not appear in the list of installed packages.

Tuesday, October 7, 2014

Isilon : Enable and Disable Isilon jobs

Disable Isilon running jobs

1. Open an SSH connection on any node in the cluster and log on using the "root" account.
2. Run the following command to disable the Collect job:
isi job types modify collect --enabled false
3. When asked if you are sure you want to modify the job, type yes.
4. Run the following command to disable the MultiScan job:
isi job types modify multiscan --enabled false

5. When asked if you are sure you want to modify the job, type yes

Enable Isilon jobs

1. Open an SSH connection on any node in the cluster and log on using the "root" account.
2. Run the following command to enable the Collect job:
isi job types modify collect --enabled true
3. When asked if you are sure you want to modify the job, type yes.
4. Run the following command to enable the MultiScan job:
isi job types modify multiscan --enabled true
5. When asked if you are sure you want to modify the job, type yes
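
If both jobs need to be toggled around a maintenance window, the two commands can be wrapped in a small loop; this is a sketch that simply repeats the commands above, and each modify still prompts for the yes confirmation.

# Hedged sketch: disable Collect and MultiScan before maintenance...
for job in collect multiscan; do
    isi job types modify $job --enabled false    # answer yes when prompted
done
# ...and re-enable them afterwards
for job in collect multiscan; do
    isi job types modify $job --enabled true     # answer yes when prompted
done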

Isilon : Restart DNS service

Restarting dnsiq is a best practice before performing node reboots.

# isi_for_array -s 'ps awux|grep dnsiq_d|grep -v grep'

isi_for_array -s 'ps awux|grep dnsiq_d|grep -v grep'
root    1560  0.0  0.5 54740  7064  ??  Ss    1:03PM   0:00.78 /usr/sbin/isi_dnsiq_d

Check the process state: 'Ss' is normal (sleeping); 'Is' means the daemon is idle, which would be something to look into.
The CPU time of 0:00.78 is very little, but the process has just started; compare it with the dnsiq daemons on other nodes that do not serve SmartConnect requests.

To restart the process on all nodes except node 1 (the -x 1 excludes that node from the command):


# isi_for_array -x 1 killall isi_dnsiq_d

Once the process is killed, it will be automatically started by Master Control Process (MCP).




Isilon code upgrade (Rolling/Simultaneous)


Restart the dnsiq service before starting the upgrade process. Below are the steps.


Restarting dnsiq is a best practice before performing node reboots.

# isi_for_array -s 'ps awux|grep dnsiq_d|grep -v grep'

isi_for_array -s 'ps awux|grep dnsiq_d|grep -v grep'
root    1560  0.0  0.5 54740  7064  ??  Ss    1:03PM   0:00.78 /usr/sbin/isi_dnsiq_d

Check the process state: 'Ss' is normal (sleeping); 'Is' means the daemon is idle, which would be something to look into.
The CPU time of 0:00.78 is very little, but the process has just started; compare it with the dnsiq daemons on other nodes that do not serve SmartConnect requests.

To restart the process on all nodes except node 1 (the -x 1 excludes that node from the command):


# isi_for_array -x 1 killall isi_dnsiq_d



Implementation Steps:


1) Verify the health status of the Isilon cluster
isi status -v
Resolve any errors and warnings that exist.

2) Restart the cluster before performing the upgrade. Reboot One node at a time. Restarting the cluster prior to performing the upgrade flushes the caches, frees memory, clears unused connections, and allows you to find and address issues that could impact the upgrade. 

3) Verify available space on the cluster is greater than 10% and the available space on each node is greater than 5%. 

4) Verify hardware status with isi_for_array -s "isi_hw_status". 

5) Resolve errors and outstanding events
view events: isi events list

6) Verify boot drive status
isi_for_array -s "gmirror status | grep -i degraded"

7) Verify data device status
isi devices | grep -v healthy

8) Collect cluster logs
isi_gather_info

9) Upload the code to the Isilon cluster using WinSCP to /ifs/data

10) Open a secure shell (SSH) connection to any node in the cluster and log in using the
root account

11) In the OneFS command-line interface, run the following command, specifying the
installation image file name:
md5 <installation image file name>
The command returns an MD5 checksum value.

12) Compare the MD5 checksum value recorded from the EMC Online Support site against
the MD5 checksum generated from the command-line interface.

13) Perform a pre-upgrade health check using the command isi update --check-only

The system returns a list of any warnings or errors that are present, and then the
following prompt appears:
Please specify the image to update:

14) At the prompt, type the absolute path or URL of the image location, and then press
ENTER.

15) Disable CMP and TPSTAT Pods


Rolling

16) Run the upgrade: isi update --rolling --manual

Simultaneous

16) Run the upgrade: isi update

17) Enable CMP and TPSTAT Pods

18) Verify the version
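
Steps 1 through 8 above lend themselves to a single health-check pass whose output can be captured before the upgrade window. The sketch below only strings together the commands already listed; nothing new is assumed.

# Hedged sketch: capture the pre-upgrade health checks in one pass
isi status -v
isi events list
isi_for_array -s "isi_hw_status"
isi_for_array -s "gmirror status | grep -i degraded"
isi devices | grep -v healthy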



Validation steps:


Post upgrade steps

1. Check the new version number of the cluster:
uname -a
2. View the status of the cluster and make sure all your nodes are operational:
isi status -D -w
3. Ping all of the cluster's internal and external interfaces to verify network connectivity
and to help verify that SmartConnect works correctly.
4. Review the list of events and address any critical events:
isi events list -w
5. Check the status of jobs and resume the jobs that you paused for the upgrade:
isi job status view
6. Verify your network interfaces:
isi networks list interfaces
7. Verify your subnets:
isi networks list subnets --verbose
8. Verify your pools:
isi networks list pools --verbose
9. Review the cluster's other log files to check for stray problems:
cat /var/log/messages
10. Review the list of SyncIQ jobs:
isi sync jobs list
11. Check the SyncIQ job reports:
isi sync reports list
12. Review the list of your scheduled snapshots:
isi snapshot schedules list
13. Check the cluster's input and output; type Ctrl-C when you are done:
isi statistics system --nodes --top
14. Check the devices in the nodes to validate the status of your drives:
isi_for_array -s "isi devices | grep -iv healthy"
15. Check your global SMB settings:
isi smb settings global view
16. Check the status of the firmware to ensure that the firmware is consistent across nodes:
isi firmware status
17. Make sure that all your licenses carried over and remain up to date:
isi license
18. Check the status of your authentication providers to make sure they remain active:
isi auth status --verbose
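
Most of the checks above are read-only, so they can be collected into one script and the output saved for the change record. The sketch below reuses a subset of the commands already listed; the output file path is only an example.

# Hedged sketch: run the read-only post-upgrade checks and keep the output
(
  uname -a
  isi status -D -w
  isi events list -w
  isi networks list interfaces
  isi firmware status
  isi license
  isi auth status --verbose
) > /ifs/data/post_upgrade_checks.txt 2>&1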


Wednesday, September 10, 2014

Temperature Front Panel - Isilon

Temperature front panel

isi_for_array  isi_hw_status | grep "Temp Front Panel"

lsass test - Isilon

lsassd/LDAP lookup test: compare a directory listing that resolves user and group names (ls -l) against one that shows numeric IDs only (ls -ln):
time ls -l

time ls -ln

tcp dump - Isilon

tcpdump -ni vlan0 -s0 -w /ifs/data/Isilon_Support/snmp.pcap -- port 161
tcpdump -ni vlan0 -s0 -- port 161

tcpdump -ni vlan0 -s0 -- udp

Isilon - Iperf

Iperf test
iperf -c 10.100.10.6 -w 262144
iperf -s -w 262144
iperf -s --bind 10.100.9.42 -w 262144


Tuesday, July 1, 2014

Isilon - Insight IQ service restart

sudo stop insightiq
sudo start insightiq

Isilon - GUI restart

isi services -a isi_webui disable
isi services -a isi_webui enable

Isilon - Increasing Quota space on Isilon share

Increasing Quota:

isi quota quotas modify --path <path> --type directory --hard-threshold <size> --advisory-threshold <size>
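
For example (hypothetical path and sizes, borrowing the /ifs/test_2 path from the create example further down), raising a quota to a 3G hard threshold with the advisory set a bit below it could look like:

isi quota quotas modify --path /ifs/test_2 --type directory --hard-threshold 3G --advisory-threshold 2G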



Isilon - Create NFS Share

Provisioning NFS share

1) Create Directory:
mkdir <path>

2) Create Quota
isi quota quotas create --path <path> --type directory --hard-threshold 1G --advisory-threshold 750M --container yes --enforced true
--include-snapshots false --thresholds-include-overhead false
The advisory threshold is roughly 70 percent of the hard threshold.

Example
isi quota quotas create --path /ifs/test_2 --type directory --hard-threshold 2G --advisory-threshold 1G --container yes --include-snapshots false
--thresholds-include-overhead false --enforced true

3) Create NFS Export
isi nfs exports create <paths> --clients <string> --clients <string2> --root-clients <string> --root-clients <string2> --description <string>
--map-root root --security-flavors unix --force no --all-dirs yes

example:
isi nfs exports create /ifs/test_2 --clients 10.xx.xx.xx --clients 10.xx.xx.xx --root-clients 10.xx.xx.xx --root-clients 10.252.9.134 
--description "test nfs export" --map-root root --security-flavors unix --all-dirs yes
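
After provisioning, a quick read-back confirms that the quota and export exist. This is a sketch assuming the same /ifs/test_2 path as the example; grep is used only as a simple filter on the list output.

# Hedged verification sketch for the /ifs/test_2 example above
isi quota quotas list | grep /ifs/test_2
isi nfs exports list | grep /ifs/test_2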



Tuesday, June 17, 2014

Vblock - Simulators

Isilon - Isilon simulator
VNX - VNX simulators
UCS - UCS emulator
Switches: cisco packet tracer
VMware - VMware Workstation/vCenter/ESXi hosts/vSphere Client
Storage - StarWind/FreeNAS

Thursday, June 12, 2014

Logs file locations- Isilon

/var/log/messages - common to every component
/var/log/lsassd.log - LDAP (lsassd) logs
tail insightiq_access.log - InsightIQ logs


Messages
tail -f /var/log/messages


Logs going out from Isilon

less /var/log/isi_celog_notification.log  
tail -f /var/log/isi_celog_coalescer.log
isi_for_array -s 'ps auwx | grep celog | grep -v grep'

isi_for_array -n2 'tail -100 /var/log/isi_celog_notification.log'




Wednesday, May 14, 2014

Get list of VMs on a server: Get-VM
VM detailed information: Get-VM | Format-List -Property *
Customized VM report: Get-VM | Select-Object -Property Name,Notes,VMHost,Guest

Suppress warnings:

Set-PowerCLIConfiguration -DisplayDeprecationWarnings $false -Scope User


Using WildCards:
Select Specific virtual Machine: Get-VM -Name A*

Using Comparison Operators
-eq
-ne
-gt
-ge
-lt
-le


Using Aliases


Retrieving a list of all hosts
Get-VMHost

PowerCLI - Initial connection script commands

Connecting to a server:


connect-VIserver -server 192.168.10.10 -credential $credential
connect-VIserver -server 192.168.10.10 -protocol http
Connecting to multiple linked servers:  connect-VIserver -server 192.168.10.10 -credential $credential -AllLinked

List all connected vCenters:  connect-VIserver -Menu

Connect to multiple vCenters:  connect-VIserver -server vCenter1,vCenter2,vCenter3

Current configuration mode: Get-PowerCLIConfiguration
Set configuration mode (single or multiple vCenters):
Set-PowerCLIConfiguration
Set-PowerCLIConfiguration -DefaultVIServerMode Single -Scope User

The connected servers are stored in $global:DefaultVIServers

Suppress Certificate warnings
set-PowerCliConfiguration -InvalidCertificateAction Ignore



NFS Datastore

Mount

New-Datastore -VMHost abc.xyz.com -Nfs -Name Test -NfsHost nfsinterface.isilon.com -Path /ifs/datastore/templates

Remove Mount

Get-VMHost | Remove-Datastore -Datastore templates


Configure DNS Address

Get-VMHost | Get-VMHostNetwork | Set-VMHostNetwork -DomainName test.abc.com -DNSAddress 10.10.10.10, 10.20.20.20

Get-VMHostNetwork -VMHost host.test.abc.com | Set-VMHostNetwork -DomainName test.abc.com -DNSAddress 10.10.10.10, 10.20.20.20

Configure Syslog and Log Levels

Get-VMHost | Set-VMHostAdvancedConfiguration -NameValue @{'Config.HostAgent.log.level'='info';'Vpx.Vpxa.config.log.level'='info';'Syslog.global.logHost'='udp://sysng-te.test.statefarm.com:514'}


Create NFS adapters:

New-VMHostNetworkAdapter -VMHost host.abc.com -PortGroup INFR_ISILON_VLAN11 -VirtualSwitch switchname  -IP 10.10.10.10 -SubnetMask 255.255.255.0 -MTU 9000



Disconnect from Servers:

disconnect-VIserver -server * -Force


Get Credentials:

get-VIcredentialStoreItem

some useful links: hostilecoding.blogspot.com/2014/03/vmware-powercli-to-report-triggered.html
https://www.simple-talk.com/sysadmin/virtualization/10-steps-to-kick-start-your-vmware-automation-with-powercli/


Restarting the PowerPath watchdog service


Get-VMHost -name corpesx2g.test.statefarm.com | Get-VMHostService | where {$_.Key -match "sfcbd-watchdog"} | Restart-VMHostService -Confirm:$false


Enable and disable lock down mode on ESXi Host

(get-vmhost $ESXhost | get-view).ExitLockdownMode() # To DISABLE Lockdown Mode
(get-vmhost $ESXhost | get-view).EnterLockdownMode() # To ENABLE Lockdown Mode


Script for running commands across all hosts in vcenter

$vCenter = 'vCenterServer_Name_or_IP_address'
Connect-VIServer $vCenter
$Scope = Get-VMHost # This will change the Lockdown Mode on all hosts managed by vCenter
foreach ($ESXhost in $Scope) {
(get-vmhost $ESXhost | get-view).ExitLockdownMode() # To DISABLE Lockdown Mode

# Restart the sfcbd watchdog service on the same host
Get-VMHost -Name $ESXhost | Get-VMHostService | where {$_.Key -match "sfcbd-watchdog"} | Restart-VMHostService -Confirm:$false

#(get-vmhost $ESXhost | get-view).EnterLockdownMode() # To ENABLE Lockdown Mode
}

Disconnect-VIServer -Server $vCenter -Confirm:$false







Monday, May 5, 2014

VMAX - PowerPath, Unisphere, Solutions Enabler Installation (SE Install)

PowerPath Host Registration

rpowermt register host=abc.xyz.com username=root password=xxxx

Powerpath Installation Procedure


1) Uninstall ELMS
2) Uninstall Rtools
3) Verify running services

From Task Manager

check whether lmgrd is running as a service
if it is running, stop it

4) check for license path

EMC - powerpath -rpowermt

move license to d:powerpath_server

5) Check for services


Check the lmgrd path

6) disable emc powerpath (EMC_PP_LIC)

from services - powerpath properties

7) Install PowerPath ELMS

C:\Program Files\EMC\
choose the PowerPath license path
install

8) Install Rtools

windows folder
install

organization : statefarm
location default

install


9) Run rpowermt version

check the license path
if it points to C:, change it to D: using environment settings

10) Change environment variables

control panel
system properties
PPMT_lic_path

D:\powerpath_version\filename

11) Verify with rpowermt version

12) Verify LMTOOLS
Verify that the service name is only LMTOOLS
C:\Program Files (x86)\EMC\ELMS\

13) Register the ESX hosts

Solutions Enabler Installation (SE Install)

1. Stop ECOM service.
2. Make a backup of these folders:
On Windows:
C:\Program Files\EMC\ECIM\ECOM\conf\cst
C:\Program Files\EMC\ECIM\ECOM\conf\ssl
3. Uninstall the existing version of SE and the SMI-S provider. Sometimes the symapi folders need to be deleted manually, followed by a host reboot.
4. Install SE with SMI-S provider
Reboot the host and start the required daemons
5. Copy back the backup folders you made in step 2.
6. Start the ECOM service.
7. Put the daemon_options and daemon_users files back without replacing them.
8. Restart all the daemons, starting with storapid first,
and verify that storsrvd is running.
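
The daemon handling in steps 4 and 8 can be done with the stordaemon utility; the lines below are only a sketch (verify the daemon names against your Solutions Enabler version), starting storapid first and then confirming storsrvd is up.

# Hedged sketch: restart the SE daemons with storapid first, then verify storsrvd
stordaemon start storapid
stordaemon start storsrvd
stordaemon list     # confirm storsrvd shows as running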


Unisphere Install

Unisphere for VMAX: on D drive
Run the install.exe file --> Takes around 30 mins and install Flash if needed
Register the array for collecting stats
Login-->Performance tab-->settings-->select array and register

Requirements:
RAM: 8 GB available
Disk space: 25 GB available

Alert suppression:
As part of the matrix upgrade, Unisphere is being upgraded to the new 1.6 code. As part of the installation process, some alerts may be generated that need to be suppressed.

Implementation steps:
1. Log into the arms server "arms.test.com"
2. Uninstall SMAS 1.3.3 application through control panel
3. Run the Installation file "UNIVMAX_V1.6.0.8_WINDOWS_X86_64". Select D drive
4. After installation, Register the array into Unisphere for collecting stats as shown below
i) Login to Unisphere using "https://192.168.10.10:8443/"
ii) Select Performance tab
iii) Select settings
iv) Select the VMAX array and click register









VMAX - Environment Status (drives, hardware)

Check environment details such as hardware and disk status

symcfg list -env_data
symdisk list -failed
symcfg list -env_data -service_state notnormal

VMAX - VLUN Migration

Vlun migration commands

symcfg -sid 60 list -tdev -bound -detail -dev 9A9 -gb
symmigrate -name new_migration query -sid 60
symmigrate list
symmigrate -sid 47 -name new_migration1 -f file_name -tgt_pool FC_poolname validate
symmigrate -sid 47 -name new_migration1 -f file_name -tgt_pool FC_poolname establish

notepad file_name.txt
dir - to list files
symdev -sid 40 pin dev_id
symcfg -sid <sid> list -pool -thin -detail -gb


Friday, May 2, 2014

Isilon - truss command - tracking status

truss is a Solaris-style command that also works on BSD (the base of the Isilon OneFS operating system); it traces the system calls made while a command executes.


truss -fea isi sync policies list
truss -fea "command"


The equivalent command in Linux is   strace



Linux - Print first column

server_mount server_2 | sed 's/\|/ /' | awk '{print $1}'

Remove special characters < >

server_mount server_2 | sed 's/\|/ /' | awk '{print $1}' | sed 's/[<>]//g'

Thursday, May 1, 2014

VG8 - nas_checkup automatic scheduler

Remove automatic nas_checkup schedule from VG8

cat /nas/site/cron.d/nas_sys
remove following line
51 3 * * 7 root /nas/tools/auto_checkup > /dev/null 2>&1



Sunday, April 27, 2014

Storage Reclaim ( Celerra & VMAX ) scripts

Celerra:

The delete_share file contains all the share names that need to be removed. Do all verification carefully before starting the reclaim.


1) Verify whether there are any active client connections to the Celerra
     server_netstat server_2
     Look for CIFS and NFS connections
     Make sure the associated connections are cleared before the reclaim starts

2) Delete exports
    for i in `cat delete_share`; do echo $i; server_export server_2 -unexport -perm $i; done

3) Remove replications if any exists
    nas_replicate -list
    nas_replicate -delete replication_name -mode both/source/destination

4) Unmount file systems
     for i in `cat delete_share`; do echo $i; server_umount server_2 -perm /$i; done

5) Delete mount points
    for i in `cat delete_mountpoint`; do echo $i; server_mountpoint server_2 -delete /$i; done

6) Delete File systems
    for i in `cat delete_fs`; do echo $i; nas_fs -delete $i; done

7) Delete Luns from VG8
    for i in `cat delete_luns`; do echo $i; nas_disk -delete $i -perm; done
    ** make sure the device's "in use" status is false **
    nas_disk -list



1) Delete all filesystems used by VDM
2) Remove VDM
3) Remove file systems not associated with a VDM
4) nas_disk -d to remove all d# disks other than the control LUNs
5) remove LUNs from the storage group of the Symm channel


VMAX

Reclaiming Storage - VMAX

1. Get the list of thin devices bound in the NAS pool:
symcfg -sid xx show -pool poolname  -thin -detail -gb |grep tdev

2. Get the list of FA ports that these thin devices are mapped to
for i in `cat devs`; do symdev -sid XX show $i |grep FA; done
Validate the mapping info gathered from the above step
symcfg list -dir XX -p X -address -avail

3. Write disable the devices
for i in `cat devs`; do symdev write_disable $i -celerra -sid XX -nop; done 
Check for no errors on command exit.

4. Unmap the thin devices from all the directors
for i in `cat devs`; do symconfigure -sid XX -cmd "unmap dev $i from dir all:all emulation=CELERRA_FBA;" commit -nop; done 
Check for no errors on command exit.

5. Validate that the unmapping is successful.
for i in `cat devs`; do symdev show $i -sid XX |grep FA; done
Check for N/A listed for FA on the command exit


6. Unbind the thin devices from the thin pool
for i in `cat devs`; do symconfigure -sid 47 -cmd "unbind tdev $i from pool NAS-AR-R62;" commit -nop; done
Check for no errors on command exit.


7. Validate that the unbinding is successful.
for i in `cat devs`; do symdev show $i -sid XX |grep Pool; done
Check for N/A listed for Pool on the command exit


8. Get the list of data devices in the NAS pool:
symcfg -sid xx show -pool poolname -thin -detail -gb |grep tdat



9. Disable the data devices from the NAS Pool

for i in `cat devs`; do symconfigure -sid xx -cmd "disable dev $i from pool poolname, type=thin;" commit -nop; done


10. Remove data devices from the NAS Pool

for i in `cat devs`; do symconfigure -sid XX -cmd "remove dev $i from pool poolname, type=thin;" commit -nop; done


11. Add the data devices to the Prod SATA Pool
for i in `cat devs`; do symconfigure -sid XX -cmd "add dev $i to pool poolname, type=thin, member_state=ENABLE;" commit -nop; done


12. Validate that the devices have been successfully added to the Pool 
symcfg -sid xx show -pool PROD-AR-R62 -thin -detail -gb


13. Initiate the Pool Rebalance
symconfigure -sid XX -cmd "start balancing on pool XXXX;" preview/commit -nop 


14. Validate that the Rebalance is successful.
symcfg -sid xx show -pool PROD-AR-R62 -thin -detail -gb 
Check for "Balancing" for Pool state


15. Verify the growth in the Pool size
symcfg -sid xx show -pool PROD-AR-R62 -thin -detail -gb
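
A quick way to build the devs file that the loops above read is to take the device column from the step 1 listing. This is a sketch that assumes the device ID is the first column of each tdev line; spot-check the file before using it.

# Hedged sketch: build the devs file from the step 1 pool listing
symcfg -sid xx show -pool poolname -thin -detail -gb | grep tdev | awk '{print $1}' > devs
cat devs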









Wednesday, April 16, 2014

Robocopy for migrating Cifs shares VG8 to Isilon


Robocopy

Copying cifsshare share from VG8 to Isilon

robocopy \\source.test.abc.com\cifsshare \\destination.test.abc.com\cifsshare  /copyall /E /r:0 /w:0 /log+:backoutrobo.out /tee


Other Examples:

Robocopy C:\Scripts \\RemoteComputerName\Share /E /SEC /R:1 /W:1 /LOG:c:\Robocopylog.txt /TEE  is equivalent to
C:\Users\user>robocopy \\source.test.abc.com\cifsshare \\destination.test.abc.com\ifs\corpist01\cifsshare /copyall /r:1 /w:1 /log+:backoutrobo.out /tee
robocopy \\source.test.abc.com\cifsshare \\destination.test.abc.com\cifsshare  /copyall /E /r:0 /w:0 /log+:backoutrobo.out /tee
Robocopy \\source.test.abc.com\cifsshare \\destination.test.abc.com\ifs\corpist01\cifsshare /E /SEC /R:1 /W:1 /LOG:c:\Robocopylog.txt /TEE
C:\Users\user>robocopy \\source.test.abc.com\cifsshare \\destination.test.abc.com\cifsshare  /copyall /E /r:1 /w:0 /log+:backoutrobo.out /tee

Description:

I don't recommend using the /COPYALL parameter, as it also copies the owner information to the remote computer, which will cause problems in the future if the current owner is an Administrator on the current computer or a user who does not exist.
I also include the log option to save the logs to C:\Robocopylog.txt so that you can check for failures afterwards.
/R and /W are the retry options; I set the retry count to 1 and the wait time to 1 second so the copy won't get stuck retrying, since the default retry count is 1 million.

Please make sure the account you use to run the Robocopy command has read and write access to the \\Remote_ComputerName\Share.


C:\Users\user>robocopy \\source.test.abc.com\cifsshare \\destination.test.abc.com\cifsshare /copyall /lev:1 /r:0 /w:0 /log+:backoutrobo.out /tee

 Log File : C:\Users\user\backoutrobo.out

-------------------------------------------------------------------------------
   ROBOCOPY     ::     Robust File Copy for Windows
-------------------------------------------------------------------------------

  Started : Mon Feb 03 11:31:49 2014

   Source : \\source.test.abc.com\cifsshare\
     Dest : \\destination.test.abc.com\cifsshare\

    Files : *.*

  Options : *.* /TEE /COPYALL /LEV:1 /R:0 /W:0

------------------------------------------------------------------------------

                           1    \\source.test.abc.com\cifsshare\
          *EXTRA File              51285        ramtest.PNG
          *EXTRA File              17920        Thumbs.db
          *EXTRA File             170885        vnxe.PNG
100%        New File               21508        .DS_Store
2014/02/03 11:31:50 ERROR 5 (0x00000005) Copying NTFS Security to Destination File \\source.test.
abc.com\cifsshare\.DS_Store
Access is denied.


------------------------------------------------------------------------------

               Total    Copied   Skipped  Mismatch    FAILED    Extras
    Dirs :         1         0         1         0         0         0
   Files :         1         0         0         0         1         3
   Bytes :    21.0 k         0         0         0    21.0 k   234.4 k
   Times :   0:00:00   0:00:00                       0:00:00   0:00:00

   Ended : Mon Feb 03 11:31:50 2014

 
 
Output:

Migration completed. Logs output

------------------------------------------------------------------------------

               Total    Copied   Skipped  Mismatch    FAILED    Extras
    Dirs :      1540      1539         1         0         0         0
   Files :     11974         0         1         0     11973         3
   Bytes :  78.773 g         0    21.0 k         0  78.773 g   234.4 k
   Times :   0:37:47   0:35:47                       0:00:00   0:01:59

   Ended : Mon Feb 03 12:31:38 2014

Configure EMC Isilon NAS VAAI plugin for vSphere

www.jasemccarty.com/blog/?p=2561

Reclaim Storage on VG8 Gateway

Proper way to remove filesystem from VG8

1) Delete all file systems used by VDM
2) Remove VDM
3) Remove file systems not associated with a VDM
4) nas_disk -d to remove all d# disks other than the control Luns
5) remove LUNs from Storage Group of Symm Channel

Sunday, April 13, 2014


Isilon - InsightIQ

InsightIQ query requests to Isilon cluster

/var/log/apache2/webui_httpd_access.log
netstat -an |grep -i wait
netstat -an |grep -i established




Isilon - dtrace analysis

download dtrace_v11.py
upload it to /ifs/data
isi_for_array -s -S "nohup python /ifs/data/dtrace_v11.py >&! /dev/null &"


Isilon - LDAP groups

isi auth roles modify --role=systemadmin --add-group ldapname
isi auth roles members list --role=systemadmin

Sudoers File in Isilon - RBAC

 Modify Sudoers File:

Add custom sudoers settings for LDAP groups and users


Main sudoers file: /etc/mcp/templates/sudoers - must not be modified directly

To create custom permission settings for LDAP groups, create a new file /tmp/sudoers

vi /tmp/sudoers

User_Alias    HMONITOR = %ldapgroupname
Cmnd_Alias    ISI_MONITOR = /usr/bin/isi batterystatus*, \
                            /usr/bin/isi stat*, \
                            /usr/bin/isi status*, \
                            /usr/bin/isi_hw_status, \
                            /sbin/gmirror status*
HMONITOR ALL=(ALL) NOPASSWD: ISI_MONITOR


cp /tmp/sudoers  /etc/mcp/override


Verify sudoers procedure:   cat /etc/mcp/scripts/sudoers.py

/usr/local/etc/sudoers.d


For releases after OneFS 7.1

Use the isi_visudo command to edit the sudoers file directly






Friday, April 11, 2014

Thursday, April 10, 2014

InsightIQ code upgrade from 2.1 to 3.0

If datastore maintained locally on InsightIQ VM:

A direct upgrade from 2.1 to 3.0 is not possible. The 2.1 datastore first has to be migrated to 2.5, which takes around two weeks, and then from 2.5 to 3.0.

Take backup from 2.1 and deploy new 3.0 VM


1) Stop insightIQ service on VM
     iiq_stop

2) Take backup
3) Create Isilon datastore and export to InsightIQ VM
4) Copy the local datastore to the mount point
5) Shut down the VM
6) Remove it from the vCenter inventory
7) Deploy the new 3.0 VM from the virtual appliance (OVA)
8) Configure the VM using the old network settings and power it on
9) Add the Isilon cluster from the GUI



    1.  Open an SSH connection (CLI session) to the Isilon cluster on which you want to create the export.
                     2.  Execute the following command to make a directory for your datastore:
                                # mkdir /ifs/insightiq

                     3.  I have used /ifs/insightiq as an example to perform this operation. Feel free to choose your own path and directory name.
                     4.  Make an export by executing the following command on the Isilon cluster :
                                # isi nfs exports create --paths=/ifs/testiiq --root-clients=<IP address of IIQ 2.5.x> --root-clients=<IP address of IIQ 3.0>

                     5.  Now, open an SSH connection to the old IIQ (v2.5.x) using administrator account.
                     6.  Execute the below command to become root on IIQ:
                                # sudo su -

                     7.  Please stop the InsightIQ service:
                                # sudo stop insightiq

                     8.  Please create a folder in InsightIQ which will act as a mount point for the export
                                # cd /
                                # mkdir data_mount

                     9.  Mount the export from the cluster to IIQ
                                # mount -t nfs <IP address / Smartconnect name of cluster>:/ifs/testiiq /data_mount
                     
                   10.  You can verify the mount by executing the following command:
                                # mount -v
     
                    11.  Navigate to the local datastore on InsightIQ and then copy all the contents of the datastore to the mount point using the commands below:
                                # cd /datastore
                                # tar cf - . | tar -C /data_mount -x
                 
                     12.  After the copy completes, please repeat steps 5 - 10 on the new IIQ (v3.0) VM.
                     13.  Once you have mounted the export please navigate to the mount point and copy all the data to local datastore.
                                # cd /data_mount
                                # tar cf - . | tar -C /datastore -x

                     14.  After the copy completes please upgrade the datastore using command: 
                                # upgrade_iiq_datastore


Commands to start and stop InsightIQ services
sudo stop insightiq
sudo start insightiq



VMAX - Bind devices to pool

symdev -sid abc dev_number pin

VMAX Provisioning

VMAX Provisioning Commands

1)   Create Thin devices
symconfigure -sid abc -cmd "create dev count=40, size=10G, emulation=FBA, config=tdev;" preview
symconfigure -sid abc -cmd "create dev count=40, size=10G, emulation=FBA, config=tdev;" commit

2)   Verify available thin pools
symcfg list -thin -pool

3)   Bind devices to thin pool
symconfigure -sid abc -cmd "bind dev 100 to pool pool_name;" preview
symconfigure -sid abc -cmd "bind dev 100 to pool pool_name;" commit

4)  Make sure Zones are in place

5) Verify that you can see the wwn in VMAX
symmask -sid 123 list logins -dir 5e -p 0

6)  Create Aliases for the wwns
symmask -sid 123 -wwn 00000000000000  rename abcd123456789

7)  Create Initiator group and assign wwns
symaccess -sid 123 create -name ig_initiatorgroup -type init -wwn abcd12345678
symaccess -sid 123 add -name ig_initiatorgroup -type init -wwn abcd87654321

8) Create Port group and assign FA ports
symaccess -sid 123 create -name pg_portgroup -type port -dirport 6e:0
symaccess -sid 123 add -name pg_portgroup -type port -dirport 7e:0

9) Create Storage group and assign thin devices
symaccess -sid 123 create -name sg_storagegroup -type stor -dev 00A1
symaccess -sid 123 add -name sg_storagegroup -type stor -dev 00A2

10) check existing storage group for getting the devices list
symsg list -sid 123
symsg show sg_storagegroup

11) Create Masking view
symaccess -sid 123 create view -name mv_maskingview -sg sg_storagegroup -pg pg_portgroup -ig ig_initiatorgroup
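
Once the masking view exists, it can be read back to confirm the initiator, port and storage groups are tied together. The sketch below reuses the example names above.

# Hedged verification sketch using the example names above
symaccess -sid 123 list view
symaccess -sid 123 show view mv_maskingview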




Isilon Commands

Isilon cluster status
isi status

List Serial Numbers of Isilon Nodes
isi_for_array isi_hw_status |grep SerNo

Collect Support Materials
isi_gather_info

Verify Active connections between Isilon interfaces to Client Mountpoints
isi statistics client
netstat -an

Verify Boot drive status
gmirror status
atacontrol list

Verify faulted devices
isi_for_array "isi devices"
isi_for_array -s isi devices | grep -vi healthy
isi_for_array -s isi_hw_status | grep -i serno


To Check the Status of the cluster as a whole
isi status

#To Check the Status of an individual node
isi status -n <node #>

#To check on the status of disk or node pools
isi status -d

#To View currently running jobs (internal processes)
isi job status

#To view current cluster events (these need to be manually cleared)
isi events

#To quiet all events on the cluster
isi events quiet all

#To cancel all events on the cluster (note this does not delete the events)
isi events cancel all


#Some SyncIQ Commands
#Show the status of all currently running jobs (a summary)
isi sync jobs report

#Check on the detailed status of all running jobs
isi sync jobs report -v

#Manually start a SIQ policy
isi sync policy start <policyname>


#Restart a SIQ policy that had some problems, (still uses snaps, for the incremental)
isi sync policy resolve <policyname>

#View all Cluster Services
isi services -la

#Stop a cluster service (even one that you shouldn’t) {SyncIQ in this example}
isi services -a isi_migrate disable

#Start the same cluster service
isi services -a isi_migrate enable


isi statistics client --nodes all --top

#Show a table of useful live stats
isi statistics pstat

#Show the paths that are most active (a heat map more or less)
isi statistics heat


#A useful script to show all NFS connections on the cluster, lagged by 5 minutes
while true; do
    isi statistics client --nodes all --protocol nfs3 --totalby=Node --orderby=Node
    sleep 10
done


#Basics of Screen
#To launch a new screen session
screen
(Hit Enter to accept the EULA)

#To disconnect from your active screen session
Ctrl-A then Ctrl-D

#To list all screen sessions (per node)
screen -ls

#To reconnect to a screen session when only 1 is running
screen -r

#To reconnect to a screen session when more than 1 are running
screen -r <sessionid number from the ls command above>


#Run any command on all nodes
isi_for_array <syntax>

#Make this run sequentially, rather than in parallel, which makes the output easier to understand but takes longer
isi_for_array -s <syntax>

#Run the commands on only a subset of nodes
isi_for_array -n 5,6,7,8 <syntax>


#Maintenance tasks
#Smartfail a disk

isi devices -a smartfail -d 5:8

#This is the format of <Nodenumber:DiskNumber>
#Smartfail a node

isi devices -a smartfail -d 5


#This is the entire node; after all data is safely evacuated from the node, it will be rebooted, reformatted, and left in an unconfigured state
#Run Hardware Diagnostics on a new cluster
#DO NOT RUN THIS ON A PRODUCTION CLUSTER, it will cause very heavy resource utilization for a few hours
isi_ovt_check

#Gather Logs for Support
#The default location is in: /ifs/data/Isilon_Support/pkg/
isi_gather_info

#Gather Logs for Support but don't attempt to automatically send them back to support (via FTP, HTTP or email)
#After this command completes, a path will be output showing where the file is stored; please upload it to support manually
isi_gather_info --noupload

#Authentication Commands
#List Active Directory Providers configured
isi auth ads list

#View details of one of the resulting domains
isi auth ads view domainname.com -v

#List File Providers
isi auth file list

#Get more information
isi auth file list -v

#View LDAP providers configured
isi auth ldap list

#View NIS providers configured
isi auth nis list



#View how a specific AD user is being mapped, and the groups that are being enumerated
isi auth mapping token domainname\\username


#Also utilizing | head -20 can be helpful, because the most important information is at the top of the output.

#Access Zone Commands
isi zone zones ls
#Get more detail on all zones
isi zone zones ls --