San Addict
Saturday, May 11, 2024
StorageGRID notes
Day 1 Training
Balancing client access
1) No load balancing
2) DNS round robin
3) Connection Load Balancer (CLB) service
4) Load Balancer service (recommended)
Archive Node - VM -> on its way to being deprecated
Optimal Storage Nodes
Administrative Domain Controller ( ADC)
Gateway
ADC SSM
LDR DMV
Query ADC
ADC returns list
Gateway talks to the LDR on the node that is first in the list
When a write comes in
1) goes to the optimal Storage Node
2) copy to the second optimal Storage Node
3) send ack
When a read request comes in
1) query the node
2) query the Cassandra database
3) find which node has the data
4) send the data back to the node that received the request, which sends it to the user
5) the LDR finds the node
SG6060, SGF6060
SAS All Flash
ILM protection policy rules
Erasure coding
Data pieces
Parity Pieces
Order of sequence
1) Dual commit on write
2) Ack
3) run ILM policy
4) ILM policy is long term protection
ILM policy is the protection policy
Workflow during object replication
ILM engine in the LDR service evaluates ILM policy rules and
determines that an object should be replicated
1) ILM engine sends a replication request to the optimal destination Storage Node
2) Destination Storage Node LDR retrieves the object from the source Storage Node
3) Destination Storage Node LDR writes to object storage
4) Destination Storage Node LDR sends object metadata to the DDS service
5) DDS service replicates the metadata
CMN service - runs on the primary Admin Node; configuration management
while the primary is down -> you cannot make any config changes or upgrades
Chapter 2
StorageGRID Grid Manager
1) StorageGRID Topology tree
1) Grid health
2) Information lifecycle management (ILM) activity
3) client activity
Grid administrators use Grid Manager to:
1) create storage tenant accounts
2) manage ILM policies and rules
3) configure grid nodes and services
4) perform maintenance
Grid Topology Tree
1) Grid
Site
Grid Node
Node services
Service components
Analyzing storage node SSM service components
Storage Node
Server Status Monitor (SSM)
service state
number of threads
CPU Load
amount of memory consumed by the service
Link Cost
Cost of communication between data center sites
ADC uses link cost to determine the Grid node from which to retrieve the object
0 - 100
Object Transformation Grid Options
1) compressed (LZW algorithm) default off
2) encrypted (AES-128 or AES-256) default off
3) segmented
4) object hashing, by default SHA-1
5) prevent client modify (default off)
x-amz-server-side-encryption in the HTTP header to enable encryption per object
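A minimal sketch of setting that header per object with boto3 (the endpoint, bucket, key, and credentials below are placeholders, not from the notes):

import boto3

# Hypothetical StorageGRID S3 endpoint and tenant keys
s3 = boto3.client(
    "s3",
    endpoint_url="https://sg-gateway.example.com:8082",
    aws_access_key_id="TENANT_ACCESS_KEY",
    aws_secret_access_key="TENANT_SECRET_KEY",
)

# ServerSideEncryption adds the x-amz-server-side-encryption header,
# so this one object is encrypted even if grid-wide encryption is off
s3.put_object(
    Bucket="bucket1",
    Key="object1",
    Body=b"hello",
    ServerSideEncryption="AES256",
)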
Transformation Option: Segmentation
single control block identifier (CBID)
object container
segment container that lists the header information of all segments as content
default max segment size is 1 GB
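A quick worked example of the 1 GB default segment size (illustrative arithmetic only; the object size is made up):

import math

MAX_SEGMENT = 1 * 1024**3           # default max segment size, 1 GB
object_size = int(2.5 * 1024**3)    # e.g. a 2.5 GB object

# The segment container lists the segments; the data is split like this
segments = math.ceil(object_size / MAX_SEGMENT)
print(segments)                     # 3 segments: 1 GB + 1 GB + 0.5 GB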
StorageGRID object durability options
Dual commitment
Stored object hashing
prevent client modify
Dual commit
Stored Object hashing
Fingerprinting is used to protect the integrity of stored objects
object hash information is stored in the content management database (CMDB)
Distributed Data Store (DDS) service
ILM evaluation
ILM Policy
Object Ingest
Prevent client modify: a system-wide setting
StorageGRID Administrators
root account
Configuring Identity Federation
enable identity federation
2 certificates: one for the grid management interface,
one for Storage Nodes and API Gateway Nodes
Obtaining the StorageGRID CA Certificate
Day 2 Training
Storage Tenant Administration:
create Tenant as management unit
Tenants created based on management
Metadata
Grid admin create tenants
Tenant admin create buckets
Bucket contains data
volumes under the nodes are file systems on the physical storage
Tenant account based on entities
You can set up access between the buckets; relationships can be configured
1) Creating a Tenant account
1) Tenant
2) Create
Allow platform services is disabled
Tenant authentication
local user root account for Tenant
Grid admin knows tenant admin creds
Tenant admin can manage and change the password; Tenant admin can block Grid admin from managing
As grid admin, the password can be modified for the Tenant
Grid admin configures access to the bucket
Once you log in to the tenant admin, you can configure
identity federation
Grid admin knows the Tenant admin password
Tenant manager webpage
To log into the tenant manager webpage, add the account ID to the URL
ex: https:///?accountid=
URL for Tenant manager can be accessed from StorageGRID webpage
root accounts are at Grid level
root accounts are at Tenant level as well
Tenant Manager Dashboard
quota utilization will be displayed (depending on quota settings)
S3 policy - allows a group of users to access/manage S3 buckets in a specific Tenant
you can create multiple groups to manage different sets of buckets
Group Policy:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
S3 Access Keys
each user of an S3 tenant account must have an access key to store and retrieve objects
Grid admins cannot create buckets
Only Tenant admins create buckets
Quota is set at the Tenant level, not the bucket level
S3 API
only Tenant admins manage buckets
An access key is like a username and password
compliance can be enabled at the Grid level or the Bucket level
not at the Tenant level
Access keys
Control access to the bucket
Connect to Tenant manager
Create Access Keys and set expiration time
Access Key ID:
Secret Access Key:
either create 1 key for multiple buckets
or create 1 key for each bucket
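A minimal sketch of an S3 client using such an access key pair (boto3; the endpoint and keys are placeholders):

import boto3

# Hypothetical StorageGRID load balancer endpoint and tenant access keys
s3 = boto3.client(
    "s3",
    endpoint_url="https://dc1-g1.demo.example.com:8082",
    aws_access_key_id="ACCESS_KEY_ID",
    aws_secret_access_key="SECRET_ACCESS_KEY",
)

# The key pair authenticates the tenant user, much like a username/password
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])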
Bucket access control:
path-style URL:
You will never have to change the certificate (with path-style URLs).
path-style URL requests do not include the bucket name in the domain name
ex: http://host_name.domain_name/bucket_name
virtual-hosted-style URL requests include the bucket name in the domain name
ex: http://bucket_name.host_name.domain_name
buckets and objects are resources that are accessed by using a unique resource identifier
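A small sketch of forcing path-style addressing in boto3 (endpoint, keys, bucket, and key are placeholders); handy when the certificate does not cover bucket-name subdomains:

import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.demo.example.com",
    aws_access_key_id="ACCESS_KEY_ID",
    aws_secret_access_key="SECRET_ACCESS_KEY",
    config=Config(s3={"addressing_style": "path"}),   # or "virtual"
)

# Path style: the request goes to https://s3.demo.example.com/bucket1/object1
s3.get_object(Bucket="bucket1", Key="object1")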
Cloud Mirror Replication Endpoints
Uniform Resource Identifier (URI)
Destination host and port
For a StorageGRID destination
API Gateway Node or Storage Node
port 8082
https://dc1-g1.dem.netapp.com:8082
URN: destination S3 bucket
For AWS as destination
arn:aws:s3:::bucket_name
for StorageGRID as destination
urn:sgws:s3:::Bucket_name
Controlling access to buckets and Objects
Bucket policies:
are configured using the S3 REST API
control access from specific users and groups to a bucket and to the objects in the bucket
apply to only one bucket but possibly to multiple users and groups
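A rough sketch of pushing a bucket policy through the S3 API with boto3 (bucket name, principal, and endpoint are placeholders; the exact principal format depends on the tenant):

import json
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://dc1-g1.demo.example.com:8082",
    aws_access_key_id="ACCESS_KEY_ID",
    aws_secret_access_key="SECRET_ACCESS_KEY",
)

# Allow one tenant user read-only access to a single bucket
policy = {
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::PLACEHOLDER-ACCOUNT:user/reader"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::bucket1",
                "arn:aws:s3:::bucket1/*",
            ],
        }
    ]
}

s3.put_bucket_policy(Bucket="bucket1", Policy=json.dumps(policy))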
Group Policies:
are configured using Tenant Manager or the Tenant Management API
give the group members access to specific resources
apply to only one group but possibly to multiple buckets
Bucket configuration options:
Creating compliant buckets:
You need to enable compliance at the Grid level before you can enable it at the bucket level
Bucket details:
Name
Region
compliance
enable compliance check box
retention period
after retention period.
Legal hold: if you check it, data becomes undeletable and unmodifiable
when you uncheck it, data can be deleted as per the retention policy
consistency level:
Default consistency level (the default setting)
updates the database grid-wide
1 copy on B and 1 copy on C
the database knows where the data is
then the ILM policy kicks in
dual commit, ack, and then the ILM rule
when an update comes in, it creates a new object; it does not modify the existing one
object consistency is achieved eventually
for strong-site: increase database replication within the site
for strong-global: increase database replication across the grid
which can lead to poor performance
Last access time update:
disabled by default
Platform services:
compute
network
storage
lambda serverless compute
Notifications: OCR Example
Day 3 Training
ILM policies and rules
Grid manager defines protection
Grid manager talks to the tenants about how they want data protected
Grid manager is the only one that makes ILM rules
ILM rules -> filter and type of protection
Filter -> what you want to protect
Protection -> 1 copy in A and 1 copy in B
rule1.tenant1.bucket1. 1 copy in A and 1 copy in B
ILM Policy
collection of rules
policies are prioritized
Rule1
rule2
order of rules matters
Filter -> Filter based on anything
protection -> how many copies and where
Filters identify which rule applies to an object
basic rules
advanced rules
11.5 ->
11.3 -> one tenant per rule
Object Last Access Time updates
Advanced Filter:
Metadata Type
Ingest Time
Last access time
Key
Object Size (MB)
user metadata
Location Constraint
Object Tag (S3 only) - recommended with S3 (see the sketch below)
key-value pair
key:value pairs are defined by the application owner
storage admins use the key-value pairs but do not create them
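A small sketch of tagging an object at ingest so an ILM rule with an object-tag filter can match it (tag, names, endpoint, and keys are just examples):

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://dc1-g1.demo.example.com:8082",
    aws_access_key_id="ACCESS_KEY_ID",
    aws_secret_access_key="SECRET_ACCESS_KEY",
)

# The application owner defines the tag; an ILM advanced filter can then
# match on project=alpha to pick the placement rule
s3.put_object(
    Bucket="bucket1",
    Key="report.csv",
    Body=b"col1,col2\n",
    Tagging="project=alpha",
)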
Rules put objects in a storage pool
A storage pool is a collection of nodes with similar attributes
Site A:
A-Cap-Pool
A-perf-Pool
Storage poolA
Storage PoolB
Storage poolC
Storage Grades:
default 0
performance 1
capacity 2
secure 3
If you do not assign a grade, the default is 0 and all nodes are treated the same
capex - Capital expenditure
opex - operating expenditure
2 different budgets
Erasure Coding:
regionally distributed erasure coding (6+3)
6 data
3 parity
1 GB = 1.5 GB stored (erasure coding)
1 GB = 3 GB stored (multiple copies)
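A quick check of those overhead numbers (simple arithmetic based on the 6+3 scheme above and 3 full copies):

# Erasure coding 6+3: 6 data fragments + 3 parity fragments
data, parity = 6, 3
ec_overhead = (data + parity) / data   # 1.5 -> 1 GB is stored as 1.5 GB
replica_overhead = 3                   # 3 full copies -> 1 GB is stored as 3 GB
print(ec_overhead, replica_overhead)   # 1.5 3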
erasure coding drawback is latency
re-assemble packets
copy packets from remote site
during a write you will not experience any latency
for reads you will experience some latency
Each site will have its own gateway
Gateway node for each site -> 2 of them per site
ILM Policy creation:
Define Storage Grades (optional)
Assign Storage grades to Storage Nodes
Configure Storage Pools
Define S3 Regions
Create ILM Rules
Configure the proposed ILM policy
Activate the ILM policy
ILM rule object placement
one copy in DC1
one copy in DC2
one copy in DC3
ILM Rule ingest Behaviour
11.5 balanced is default
Strict, Balanced and dual commit
After policy is created, Add rules to it
always test your policy
Use extreme caution when modifying ILM policies and rules
Always simulate and validate a proposed ILM policy before activating the policy
when a new ILM policy is activated, the ILM policy rules are applied
Any time new rule is made, simulate and test
Verify Object Placement
Lookup section helps to troubleshoot performance issues
it finds where the object is located
rebuild objects
Object metadata lookup.
Object. <>. Lookup
Day 4 Training
Monitoring
Unknown - most severe
current alarms
DCM
Alarm Class Types
There are 3 classes of alarms
Default alarms
Global custom alarms
Custom alarms
Node level alarms
Custom Grid level
Grid Manager Attribute Charts
Grafana
AutoSupport
Audit logs - gathered by Admin node
not very human-readable
NetApp has tools that make them more readable
a command-line tool produces readable output
Off, Error, Normal, Debug (Trace logging)
audit logs are stored in /var/local/audit/export
audit-explain tool for readable output
audit-explain audit.log
audit-explain 2019-08-12.txt.gz
audit node can make the audit log directory accessible to client hosts
To share the audit log files, run the CIFS utility
start CIFS configuration utility : config_cifs.rb
For NFS
config_nfs.rb
add-audit-share
add-ip-to-share
validate-config
Monitoring
To stop and start services, you need to run from the CLI
storagegrid-status
Server manager to stop, start, restart services
Stopping and Starting all storageGRID Node services
Stop all node services
/etc/init.d/servermanager stop
Start. /etc/init.d/servermanager start
restart all node services
/etc/init.d/servermanager restart
Physical appliances are extremely robust
For one particular service
service status
service start
service stop
force a node service to stop
sv -w
Thursday, June 28, 2018
NetApp - FlexVol
Flex Volumes
- data volumes
- basic building block of data management
- snapshots are taken at volume level
- specific to a data SVM
CLI commands
mounting on client
- mkdir /mnt/vol1
- mount -t nfs <netappinterface>:/vol1 /mnt/vol1
Creating volume
- volume create -vserver demo -volume vol10 -aggregate aggr4 -size 50MB
- volume mount -vserver demo -volume vol10 -junction-path /vol4
- volume modify -vserver demo -volume vol10 -policy default
- volume create -vserver demo -volume vol50 -aggregate aggr20 -size 500MB -policy default -junction-path /vol50 --> all pieces in one command
Friday, April 20, 2018
Isilon Tasks
Initial Installations
Capacity Expansions
Adding new capacity
Configuring Network
Creating Static, Dynamic pools
Create NFS shares
Create SMB shares
Create Quotas
Create SyncIQ Policies
Create SnapshotIQ
Configure SyncIQ failover and Failback
Dedupe
Simultaneous code upgrade
Rolling code upgrade
Node upgrades
disk upgrades
InsightIQ code upgrade
InsightIQ Virtual appliance deployment
InsightIQ Linux installation
Migrations
rsync
isi_vol_copy
isi_vol_copy_vnx
EMCOPY
Sunday, April 15, 2018
Pure Storage
Dashboard
Hardware
M10 - 30TB
M20 - 250 TB
M50 - 500 TB
M70 - 1.5 PB
X70 - 1.1 PB
Operating System
Version: 3.0
Health status
latency
IOPS
Bandwidth
Storage
1) Create host - host1
2) Add FC ports
3) create volumes (a provisioning sketch follows this list)
4) create clone -> copy snapshot to volume
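A rough sketch of those provisioning steps with the purestorage Python client (array address, API token, WWNs, and names are all placeholders):

import purestorage

# Hypothetical FlashArray address and API token
array = purestorage.FlashArray("pure01.example.com", api_token="API-TOKEN")

# 1) create host and 2) add FC ports (placeholder WWNs)
array.create_host("host1", wwnlist=["21000024FF409A01", "21000024FF409A02"])

# 3) create a volume and connect it to the host
array.create_volume("vol1", "1T")
array.connect_host("host1", "vol1")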
Replications
1) Create Protection group
2) add members to the group
3) add ESX cluster
4) add targets -> add Pure Storage array targets
5) define replication policy
SRM
site recovery manager to connect Pure storage to vmware
Analysis for graphs
System
alerts configuration, SNMP, system time, directory service
Host Connections
Plugins - vCenter plug ins
VSS - application-consistent snapshots for Windows
Pure storage architecture
Monitoring of the Pure storage through PURE1 Cloud ( similar to ESRS)
Pure 3.0 features
Non-disruptive -> software updates
Capacity expansion
performance expansions
ZeroSnap using vmware xCopy feature
More security through Always on Encryption AES
VAAI xcopy
VAAI thin prov
vSphere plugin
Support model - PureCloud
IBM XIV run guide
IBM XIV
Models:
GUI Tools: XIV storage management
Tasks:
Provision Storage
1) Add Hosts
2) Add HBA ports to Hosts
3) Verify Host connectivity
Use snapshots
1) right click on data volume
2) Create snapshot
a snapshot is a set of pointers to the original 1 MB data blocks
3) Snapshot restore copies the snapshot pointers back to the original data
Mirror - Synchronous, Asynchronous
Monitor and view snapshots
Tuesday, May 16, 2017
VEDA - helpful links
These are just helpful links. Credits to the owner
Step by step process
http://www.virtualizetips.com/2010/06/28/install-vsphere-esx-4-0-with-eda-deployment-appliance/
https://www.experts-exchange.com/articles/2369/Installing-ESX-through-EDA-multi-VMware-installation.html