
NetApp PowerShell Toolkit 101: Getting Started


The NetApp PowerShell Toolkit (NPTK) is a great way to get started administering your NetApp resources, both 7-mode and clustered Data ONTAP (cDOT), in a more efficient and scalable manner.

Getting the Toolkit

The download (version 3.2 at the time of this writing) is available from the NetApp Communities in the Microsoft Cloud and Virtualization board.

The download page links to two great resources: the Getting Started presentation and Making the Most of the NetApp PowerShell Toolkit. Both are excellent reads if you want some starting hints.

Getting Help

  • The NetApp Communities: The communities are a great place to get help quickly for any question you might have. I recommend that you use the Microsoft Cloud and Virtualization Discussions board; the SDK and API board sees relevant questions from time to time as well. You can also send me a message using the NetApp Communities. My username is asulliva, and I’m happy to respond to questions directly through the Communities messaging system.
  • From the NPTK itself: One of the lesser-known features of the Toolkit is that it has help built in. Yes, you can use the standard Get-Help cmdlet, but there’s a hidden treasure: Show-NcHelp. This cmdlet will generate an HTML version of the cmdlet help and open your default browser to display it.


    From here you can dig through the cmdlets and view all of the information you want to know about them quickly and easily.
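    Both forms of help are a single line each (a quick sketch):

    # standard PowerShell help for a single cmdlet
    Get-Help Connect-NcController -Detailed
     
    # generate HTML help for every cmdlet and open it in the default browser
    Show-NcHelp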

A Few Basics To Get Started

Now that you have the toolkit and have installed it, it’s time to use it. Let’s look at a couple of basic tasks.

Note: I will be using the cDOT cmdlets, however nearly all of the commands have an equivalent available for 7-mode.

Connecting to a controller
Connecting to your cluster is extremely easy. You can specify the cluster management IP address, or any of the node management IPs as well. If you do not provide credentials as a part of the command invocation, it will prompt for them.

# connect to the cluster management LIF
Connect-NcController $controllerNameOrIp -Credential (Get-Credential)
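The connection returned by Connect-NcController can also be captured in a variable and passed explicitly, which is handy when working with more than one cluster at a time (a small sketch):

# store the connection and target it explicitly with -Controller
$cluster = Connect-NcController $controllerNameOrIp -Credential (Get-Credential)
Get-NcNode -Controller $cluster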

Getting Information
Now that we’re connected to the cluster, let’s take a look at some of the information that can be gathered:

# show cluster information
Get-NcCluster
 
# show node information
Get-NcNode
 
# show the number of disks assigned to each controller
Get-NcDisk | %{ $_.DiskOwnershipInfo.HomeNodeName } | Group-Object
 
# show a summary of disk status
Get-NcDisk | %{ $_.DiskRaidInfo.ContainerType } | Group-Object
 
# show failed disks
Get-NcDisk | ?{ $_.DiskRaidInfo.ContainerType -eq "broken" }
 
# show root aggregates
Get-NcAggr | ?{ $_.AggrRaidAttributes.HasLocalRoot -eq $true }
 
# show volumes which are not SVM root volumes
Get-NcVol | ?{ $_.VolumeStateAttributes.IsVserverRoot -eq $false }

Onward to Automation

This doesn’t even begin to scratch the surface of the NetApp PowerShell Toolkit. Anything that can be done from the command line can be done using the toolkit. If you’re interested in seeing specific examples, need help, or just have questions, please let me know in the comments!



NetApp PowerShell Toolkit 101: Cluster Configuration


Using the NetApp PowerShell Toolkit (NPTK) can sometimes seem a daunting task. Fortunately, configuring most aspects of your storage system with it is quite intuitive. Let’s start by looking at some of the cluster-level configuration items that can be managed using the NPTK.

In this post we will cover:

  • AutoSupport
  • Licenses
  • Cluster Management LIF(s)
  • Inter-Cluster LIF(s)
  • SNMP
  • DNS

AutoSupport
Configuring AutoSupport is one of the most important things you can do for your system. AutoSupport enables the system to contact NetApp in the event of an error and allows NetApp to perform proactive and preemptive support for your systems.

  • Note: If you receive a type conversion error from Set-NcAutoSupportConfig, this is a known bug with version 3.1 (and earlier) of the toolkit. The workaround is to simply provide a value for the MinimumPrivateDataLength parameter:

    # May fail due to bug in NPTK v3.1 and below:
    Get-NcNode $nodeName | Set-NcAutoSupportConfig -To "you@work.com"
     
    # succeeds, with workaround.  2 is the default value.
    Get-NcNode $nodeName | Set-NcAutoSupportConfig -To "you@work.com" -MinimumPrivateDataLength 2

Some common tasks for AutoSupport include:

# get the current autosupport configuration
Get-NcNode $nodeName | Get-NcAutoSupportConfig
 
# set the most common parameters
$splat = @{
  'Transport' = "https";
  'IsPrivateDataRemoved' = $true;
  'IsLocalCollectionEnabled' = $true;
  'IsEnabled' = $true;
  'IsSupportEnabled' = $true;
}
 
Get-NcNode $nodeName | Set-NcAutoSupportConfig @splat
 
# if you use a proxy for web access, setting the configuration is quite easy
Get-NcNode $nodeName | Set-NcAutoSupportConfig -ProxyUrl "$($username):$($password)@$($proxyhost):$($port)"

Licenses
Adding licenses to your cDOT system is trivial:

Add-NcLicense -License ABCDEFGHIJKLMNOP

You can quickly compare the licenses in your cluster using this one-liner:

Get-NcLicense | Group-Object -Property Owner | Sort-Object -Property Name


This gives an easy-to-check comparison of whether the same license count has been applied to each node. Alternatively, you could use this one-liner to view the licenses and which host they have been applied to:

Get-NcLicense | Select-Object Owner, Description | Group-Object -Property Description


Cluster Management LIF(s)
Each cluster must have at least one LIF which is used for managing the cluster itself.

# view the cluster management LIF(s)
Get-NcNetInterface | ?{ $_.Role -eq "cluster_mgmt" }
 
# managing the cluster management LIF is just like any other interface
# change the IP address
Get-NcNetInterface cluster_mgmt | Set-NcNetInterface -Address $newIP -Netmask $Netmask
 
# change the home node and port
Set-NcNetInterface -vserver $clusterName -Name cluster_mgmt -Node $newNode -Port $newPort

Moving the cluster management LIF ahead of maintenance operations on the hosting node is a good idea to avoid any potential issues with connectivity. This function will move the cluster management LIF to another host in the failover group:

function Move-LifInFog {
    [cmdletbinding(SupportsShouldProcess=$true)]
    Param(
        [Parameter(
            Mandatory=$true,
            ValueFromPipeline=$true
        )]
        [DataONTAP.C.Types.Net.NetInterfaceInfo]
        $LIF
    )
    process {
        # determine the new destination port
        $newPort = Get-NcNetFailoverGroup | ?{ 
                $_.FailoverGroup -eq $LIF.FailoverGroup `
                    -and $_.Node -ne $LIF.CurrentNode
            } | Get-Random
 
        $message = "Moving LIF to $($newPort.Node):$($newPort.Port)"
 
        if ($PSCmdlet.ShouldProcess($LIF, $message)) {
            $LIF | Move-NcNetInterface -DestinationNode $newPort.Node -DestinationPort $newPort.Port
        }
    }
}
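With the function loaded, relocating the LIF ahead of maintenance is a one-liner; -WhatIf previews the move before committing to it (a usage sketch):

# preview, then perform, the move
Get-NcNetInterface | ?{ $_.Role -eq "cluster_mgmt" } | Move-LifInFog -WhatIf
Get-NcNetInterface | ?{ $_.Role -eq "cluster_mgmt" } | Move-LifInFog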


Inter-Cluster LIF(s)
Inter-cluster LIFs are used for SnapMirror and SnapVault relationships. They are a standard network interface with a specific role and firewall policy assigned.

# view ICLs for the cluster
Get-NcNetInterface -Role intercluster
 
# view ICLs for a specific node
Get-NcNode $nodeName | Get-NcNetInterface -Role intercluster
 
# create a new ICL
New-NcNetInterface -Name "$($nodeName)_ICL" -Node $nodeName -Role intercluster -Port $portName -DataProtocols none -Address $ip -Netmask $netmask
 
# change the home port of an ICL (remember ICLs have to stay local to the node)
Get-NcNetInterface -Name "$($nodeName)_ICL" | Set-NcNetInterface -Node $nodeName -port $newPortName
 
# to actually move the ICL to a new port, you need to use the Move-NcNetInterface cmdlet
Move-NcNetInterface -Name "$($nodeName)_ICL" -DestinationNode $nodeName -DestinationPort $newPort
 
# alternatively, if you changed the home port, just send it home
Invoke-NcNetInterfaceRevert -Name "$($nodeName)_ICL"

SNMP
Using just a couple of commands, we can configure and enable SNMP for the cluster.

# create the community
Add-NcSnmpCommunity -Community notPublic
 
# start the SNMP service
Enable-NcSnmp
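You can verify the result afterward. A quick sketch, assuming the toolkit's Get-NcSnmp cmdlet:

# confirm the community and service status
Get-NcSnmp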

DNS
DNS is managed much like any other service on the cluster.

# show the DNS config for a SVM
Get-NcNetDns -Vserver $svmName
 
# modify the DNS configuration for a SVM
Get-NcNetDns -Vserver $svmName | Set-NcNetDns -NameServers 1.1.1.1,2.2.2.2,3.3.3.3
 
# an example from the documentation...copying the DNS configuration of one
# SVM to another in one simple step
Get-NcNetDns -Vserver $oldSVM | Set-NcNetDns -VserverContext $newSVM


NetApp PowerShell Toolkit 101: Node Configuration


In the last post we looked at some settings that apply to the cluster. This time, let’s look at how to administer nodes.

In this post we will cover using the NetApp PowerShell Toolkit to manage these aspects of nodes:

  • Network Port Configuration
  • Node Management LIFs
  • Service Processor
  • CDP
  • Aggregates

Network Port Configuration

Clustered Data ONTAP has two types of network configuration: ports, which are the physical aspects of the network connectivity, and logical interfaces (LIFs), which are the logical entities that receive IP address (or WWPN) assignments.

A port can reference a single port on the controller, such as e0a (for an ethernet port) or 0d (for an FC port). Ports can also reference interface groups, for example when creating an LACP link aggregate, or VLAN ports, which are added onto the physical ports.

[Image: port, interface group, and VLAN hierarchy, from the NetApp document “Clustered Data ONTAP Network Management Guide”]
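A quick way to see how the ports break down by type (the PortType property is used again later in this post):

# count ports by type (physical, interface group, VLAN)
Get-NcNetPort | Group-Object -Property PortType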

Getting information about ports is just a cmdlet away…

# get the ports for a specific node
Get-NcNode $nodeName | Get-NcNetPort
 
# get the non-cluster ports for the node
Get-NcNode $nodeName | Get-NcNetPort -Role !cluster
 
# get ports that have an active link
Get-NcNode $nodeName | Get-NcNetPort | ?{ $_.LinkStatus -eq "up" }

Let’s look at how we can automate creating some different port configurations.

Interface groups provide the ability to aggregate links and provide high availability in the event of link or switch failure. Creating them is a single cmdlet:

# create a new single-mode interface group.  singlemode is a failover-only group.
New-NcNetPortIfgrp -Node $nodeName -Name a0a -Mode singlemode -DistributionFunction mac
 
# create a new static multi-mode interface group.  static multimode is the same
# as an always on Cisco port channel.
New-NcNetPortIfgrp -Node $nodeName -Name a1a -Mode multimode -DistributionFunction ip
 
# create a new dynamic multi-mode interface group. dynamic multimode is an LACP aggregate.
New-NcNetPortIfgrp -Node $nodeName -Name a2a -Mode multimode_lacp -DistributionFunction mac

Regardless of the type of interface group created, you will need to add ports before it actually works.

Add-NcNetPortIfgrpPort -Name a2a -Node $nodeName -Port e0a,e0b

The final step is to create any VLAN interfaces. These are tagged VLAN interfaces and can be created on individual ports or interface groups.

# create a VLAN port
New-NcNetPortVlan -Node $nodeName -ParentInterface $portName -VlanId $VLAN

Setting port configuration is equally important and can be managed using the Set-NcNetPort cmdlet.

# enable jumbo frames on an interface
Set-NcNetPort -Node $nodeName -Name a0a -Mtu 9000
 
# disable flow control for 10GbE interfaces
Get-NcNetPort | ?{ 
        $_.PortType -eq "physical" -and $_.OperationalSpeed -eq 10000 
    } | Set-NcNetPort -FlowControl none

Node Management LIF(s)

Node management LIFs are managed the same as any other LIF; however, they have a specific role: node_mgmt. Additionally, node management LIFs cannot be migrated off the node they are meant to manage.

# get all node management LIFs
Get-NcNetInterface -Role node_mgmt
 
# get a specific node's management LIF
Get-NcNetInterface -Vserver $nodeName -Role node_mgmt

Service Processor

# get the current network configuration of an SP
Get-NcNode $nodeName | Get-NcServiceProcessorNetwork -AddressType ipv4
 
# configure the service processor
$splat = @{
    Node = $nodeName
    AddressType = "ipv4"
    Address = $ipAddress
    Netmask = $netmask
    GatewayAddress = $gateway
}
 
Set-NcServiceProcessorNetwork @splat -Enable

CDP

Cisco Discovery Protocol (CDP) is extremely helpful for verifying that you have connected your NetApp’s physical network ports to the correct ports on the switch. It also enables the network admins to verify configuration from their end as well.

There is no cmdlet for enabling or disabling CDP on the nodes, so instead we use system-cli API calls and the Invoke-NcSystemApi cmdlet. Here is a convenient wrapper function:

function Set-NcNodeCdp {
    [CmdletBinding(SupportsShouldProcess=$true)]
    param(
        [Parameter(
            Mandatory=$true,
            ValueFromPipeline=$true,
            ValueFromPipelineByPropertyName=$true
        )]
        [System.String]
        $Node,
 
        [Parameter(
            Mandatory=$true
        )]
        [Switch]$Enabled
 
    )
    process {
        # a plain string binds directly; when an object from Get-NcNode is
        # piped in, its Node property binds by property name
        $NodeName = $Node
 
        if ($Enabled) {
            $status = "on"
        } else {
            $status = "off"
        }
 
        $zapi  = "<system-cli><args>"
        $zapi +=   "<arg>node</arg>"
        $zapi +=   "<arg>run</arg>"
        $zapi +=   "<arg>-node $($NodeName)</arg>"
        $zapi +=   "<arg>options</arg>"
        $zapi +=   "<arg>cdpd.enable</arg>"
        $zapi +=   "<arg>$($status)</arg>"
        $zapi += "</args></system-cli>"
 
        $execute = Invoke-NcSystemApi -Request $zapi
 
        $result = "" | Select-Object Node,CDP
        $result.Node = $NodeName
 
        if ($execute.results.'cli-result-value' -eq "1") {
            $result.CDP = $status
        } else {
            Write-Warning $execute.results.'cli-output'
        }
 
        $result
 
    }
}

With the above function we can now enable and disable CDP easily.

# enable for a specific node
Set-NcNodeCdp -Node $nodeName -Enabled
 
# enable for all nodes
Get-NcNode | Set-NcNodeCdp -Enabled
 
# disable for all nodes
Get-NcNode | Set-NcNodeCdp -Enabled:$false

And with CDP enabled, we can get CDP information using a cmdlet which is part of the toolkit.

# get discovered ports
Get-NcNode $nodeName | Get-NcNetDeviceDiscovery | Format-Table -AutoSize

Aggregates

Aggregates are the foundation of data storage in Data ONTAP. Without them you can’t create volumes, and without volumes you can’t store data. Let’s look at some common tasks:

# show all aggregates
Get-NcAggr
 
# show SATA aggregates.  I bet you thought this would be a Get-NcAggr command...
Get-NcDisk | ?{ 
        $_.DiskInventoryInfo.DiskType -match "SATA|BSAS" -and $_.Aggregate -ne $null 
    } | Group-Object -Property Aggregate
 
# show Flash Pool aggregates
Get-NcAggr | ?{ $_.AggrRaidAttributes.AggregateType -eq "hybrid" }
 
# create an aggregate
$splat = @{
    'Name' = $aggrName;
    'Node' = $nodeName;
    'DiskCount' = $diskCount;
    'RaidSize' = 16;
    'RaidType' = "raid_dp";
}
 
New-NcAggr @splat
 
# add disks to an aggregate
Add-NcAggr $aggrName -DiskCount $diskCount
 
# enable free space reallocation
Get-NcAggr $aggrName | Set-NcAggrOption -Key free_space_realloc -Value on

I prefer to have my root aggregate names end with “_root” to make them easily identifiable. Here is a short script that will automatically rename them for you:

# get each of the nodes
Get-NcNode | %{ 
    $nodeName = $_.Node
 
    # determine the current root aggregate name
    $currentAggrName = (
        Get-NcAggr | ?{ 
             $_.AggrOwnershipAttributes.HomeName -eq $nodeName `
               -and $_.AggrRaidAttributes.HasLocalRoot -eq $true 
        }).Name
 
    # no dashes
    $newAggrName = $nodeName -replace "-", "_"
 
    # can't start with numbers
    $newAggrName = $newAggrName -replace "^\d+", ""
 
    # append the root identifier
    $newAggrName = "$($newAggrName)_root"
 
    if ($currentAggrName -ne $newAggrName) {
        Rename-NcAggr -Controller $Cluster -Name $currentAggrName -NewName $newAggrName
    }
}


NetApp PowerShell Toolkit 101: Storage Virtual Machine Configuration


Storage Virtual Machines (SVM) are the entity in clustered Data ONTAP which the storage consumer actually interacts with. As the name implies, they are a virtual entity, however they are not a virtual machine like you would expect. There are no CPU, RAM, or other cache assignments that must be made. Instead, we assign storage resources to the SVM, such as aggregates and data LIF(s), which the SVM then uses to provision FlexVols and make them available via the desired protocol.

In this post we will look at how to configure an SVM using PowerShell.

  • Create an SVM
  • Aggregate Access
  • SVM DNS Service
  • Configuring Data LIF(s)
  • Configuring Protocols

Create an SVM

# create a new SVM
$splat = @{
    # a name for the SVM
    "Name" = $svmName;
 
    # the name of the root volume, easy to keep track of
    # by using the SVM name in the name of the root vol
    "RootVolume" = "$($svmName)_root";
 
    # the aggregate to create the root volume on
    "RootVolumeAggregate" = $rootAggrName;
 
    # the NSS setting, use "file" if unsure
    "NameServerSwitch" = "file";
 
    # will vary based on how you're accessing the volumes
    # unix = NFS, iSCSI, and/or FC/FCoE
    # ntfs = CIFS/SMB
    # mixed = all of the above
    "RootVolumeSecurityStyle" = "unix";
 
    # language, C.UTF-8 is a good default if unsure
    "Language" = "C.UTF-8";
}
 
New-NcVserver @splat

Destroying SVMs can be a complex task, as all of the resources an SVM uses must be removed first. Vidad Cosonock has created a script here that will automate removing an SVM; I highly recommend using it to simplify the process.

Aggregate Access

Limiting the aggregates that an SVM has access to can be beneficial in a multitenant environment, where you may want to dedicate disks to specific tasks or customers. However, it is also useful regardless of multitenancy, as it prevents volumes from being created on root aggregates.

# show assigned aggregates
(Get-NcVserver $svmName).AggrList

Managing aggregate access is done by modifying the SVM properties. We can wrap that into functions to make it even easier:

function Add-SvmAggrAccess {
    [CmdletBinding(SupportsShouldProcess=$true)]
    param(
        [parameter(
            Mandatory=$true,
            ValueFromPipeline=$true,
            ValueFromPipelineByPropertyName=$true
        )]
        [System.String]$Vserver
        ,
 
        [parameter(
            Mandatory=$true,
            ValueFromPipeline=$true,
            ValueFromPipelineByPropertyName=$true
        )]
        [Alias('Name')]
        [System.String[]]$Aggregate
    )
    process {
        # get the current aggr list
        $aggrList = (Get-NcVserver -Name $Vserver).AggrList
 
        # add the new aggr to the list
        $aggrList += $Aggregate
 
        if ($PSCmdlet.ShouldProcess($Vserver, "Adding aggregate $($Aggregate) to approved list")) {
            # update the assigned aggregate list
            Set-NcVserver -Name $Vserver -Aggregates $aggrList
        }
    }
}
 
function Remove-SvmAggrAccess {
    [CmdletBinding(SupportsShouldProcess=$true)]
    param(
        [parameter(
            Mandatory=$true,
            ValueFromPipeline=$true,
            ValueFromPipelineByPropertyName=$true
        )]
        [System.String]$Vserver
        ,
 
        [parameter(
            Mandatory=$true,
            ValueFromPipeline=$true,
            ValueFromPipelineByPropertyName=$true
        )]
        [Alias('Name')]
        [System.String[]]$Aggregate
    )
    process {
        # remove the aggr from the list of current aggrs
        $aggrList = (Get-NcVserver -Name $Vserver).AggrList | ?{ $_ -notin $Aggregate }
 
        if ($PSCmdlet.ShouldProcess(
                $Vserver, 
                "Removing aggregate $($Aggregate) from approved list"
        )) {
            # update the assigned aggregate list
            Set-NcVserver -Name $Vserver -Aggregates $aggrList
        }
    }
}

Using these functions it’s now quite easy to modify the aggregates that an SVM has permission to use:

# add an aggregate to the SVM
Get-NcVserver $svmName | Add-SvmAggrAccess -Aggregate $aggrName
 
# remove an aggregate from the SVM's access
Get-NcAggr $aggrName | Remove-SvmAggrAccess -Vserver $svmName

Finally, let’s add only non-root aggregates to the SVM:

# get the root aggrs
$rootAggrs = Get-NcVol | 
      ?{ $_.VolumeStateAttributes.IsNodeRoot -eq $true } | 
      %{ $_.Aggregate }
 
# remove them from the access list
Remove-SvmAggrAccess -Vserver $svmName -Aggregate $rootAggrs
 
# get the non-root aggregates
$nonRootAggrs = (Get-NcAggr | ?{ $_.Name -notin $rootAggrs }).Name
 
# add them to the access list
Add-SvmAggrAccess -Vserver $svmName -Aggregate $nonRootAggrs

SVM DNS Service

# configure new DNS
Get-NcVserver $svmName | 
    New-NcNetDns -Domains foo.bar,your.company -NameServers 8.8.8.8,8.8.4.4
 
# modify DNS configuration
Get-NcVserver $svmName | Set-NcNetDns -Domains foo.bar -NameServers $ns1,$ns2

Configuring Data LIF(s)

Before we can enable data access protocols we need to have a way of accessing the data. NetApp uses logical network interfaces, known as LIFs, assigned to the SVM. Let’s look at creating LIFs for the different protocols:

  • NFS / CIFS / SMB
    # Splatting is a convenient way to keep parameters readable
    $splat = @{
        # I prefer a simple naming convention to quickly identify LIFs
        'Name' = "$($nodeName)_$($svmName)_FILE_$($instanceNum)";
     
        # where to create the LIF
        'Vserver' = $svmName;
        'Node' = $nodeName;
        'Port' = $portName;
     
        # the type of LIF
        'Role' = "data";
     
        # both protocols are listed below; alternatively, provide only one
        'DataProtocols' = "nfs","cifs";
     
        # finally, the IP information
        'Address' = $ipAddress;
        'Netmask' = $subnetMask;
    }
     
    New-NcNetInterface @splat
  • iSCSI
    # for block based protocols we want to have one LIF per node so
    # that ALUA can work its magic.  if you're using cDOT 8.3 and Subnets
    # then creating the LIFs is quite easy
    Get-NcNode | Foreach-Object { 
        # create a LIF on each node
        $splat = @{
            # keep a nice and easy naming convention
            'Name' = "$($_.Node)_$($svmName)_ISCSI";
     
            # where to create the LIF
            'Vserver' = $svmName;
            'Node' = $_.Node;
            'Port' = $portName;
     
            # the type of LIF
            'Role' = "data";
            'DataProtocols' = "iscsi";
     
            # Using a subnet, ONTAP will allocate the IP address.
            # The gateway and subnet mask are provided when the
            # subnet is created, and the Broadcast Domain will
            # ensure that we can failover the LIF
            'Subnet' = $subnetName;
        }
     
        New-NcNetInterface @splat
    }
  • FC / FCoE
    # Like with iSCSI, we want to create an FCP / FCoE LIF
    # on each node in the cluster
    Get-NcNode | Foreach-Object { 
        # create a LIF on each node
        $splat = @{
            # keep a nice and easy naming convention
            'Name' = "$($_.Node)_$($svmName)_FC";
     
            # where to create the LIF
            'Vserver' = $svmName;
            'Node' = $_.Node;
            'Port' = $portName;
     
            # the type of LIF
            'Role' = "data";
     
            # the protocol
            'DataProtocols' = "fcp";
     
            # for fcp WWPNs will be automatically generated
        }
     
        New-NcNetInterface @splat
    }

Configuring Protocols

A newly created SVM will not have any protocols assigned to it. Adding and configuring the protocols is a few simple commands.

  • NFS
    # configuring the protocol is a bit different because there are
    # so many options we create an object and set the options there
    $nfsServiceConfig = Get-NcNfsService -Template
     
    # enable the nfs service
    $nfsServiceConfig.IsNfsAccessEnabled = $true
     
    # enable nfsv3
    $nfsServiceConfig.IsNfsv3Enabled = $true
     
    # disable nfsv2, v4.0, v4.1
    $nfsServiceConfig.IsNfsv2Enabled= $false
    $nfsServiceConfig.IsNfsv40Enabled = $false
    $nfsServiceConfig.IsNfsv41Enabled = $false
     
    # apply the config
    Get-NcVserver $svmName | Add-NcNfsService -Attributes $nfsServiceConfig
  • SMB / CIFS
    # create the CIFS/SMB server and join it to the domain
    Add-NcCifsServer -VserverContext $svmName `
        -Domain $domainName -AdminCredential (Get-Credential)
     
    # start CIFS/SMB server
    Get-NcVserver $svmName | Start-NcCifsServer
  • iSCSI
    # add the service to the SVM
    Get-NcVserver $svmName | Add-NcIscsiService
     
    # start the service
    Get-NcVserver $svmName | Enable-NcIscsi
  • FC / FCoE
    # add the service to the SVM
    Add-NcFcpService -VserverContext $svmName
     
    # start the service
    Get-NcVserver $svmName | Enable-NcFcp


NetApp PowerShell Toolkit 101: Managing Volumes


Volumes are the containers of data in a NetApp storage system. They are “stored” on aggregates, accessed via Storage Virtual Machines, and are the point-of-application for many of the features of Data ONTAP. Let’s look at what we can do with volumes leveraging the PowerShell Toolkit:

  • Creating, Deleting, and Re-sizing Volumes
  • Volume Features
    • Thin Provisioning
    • Deduplication
    • Compression
    • AutoGrow / AutoShrink
    • Fractional Reserve
    • Quality of Service
  • Volume Options
  • Snapshots
  • FlexClones
  • Volume Move

IMPORTANT! It is VERY IMPORTANT that you are mindful of the SVM context for most of these commands. If you have two volumes with the same name in different SVMs, and you don’t specify the SVM, the action will affect both. This is no different than the CLI, where you must provide the SVM to be modified.

I am not specifying the SVM with most of the commands against volumes for the sake of brevity, however please, please (please!) keep this in mind as you perform actions against your volumes.

For any action which affects volumes, there will be a parameter named “VserverContext”. This is the parameter that you will want to specify:

Get-NcVol -VserverContext "SomeDataSVM" -Name $volumeName

This command can now be safely pipelined into others which modify properties.
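For example, a resize scoped to the intended SVM (a small sketch using a cmdlet covered later in this post):

# resize only the volume in the intended SVM
Get-NcVol -VserverContext "SomeDataSVM" -Name $volumeName | Set-NcVolSize -NewSize +10g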

Creating, Deleting, and Resizing Volumes

Creating a volume is an easy enough operation and works very similarly to the CLI.

# create a new volume for a SVM
$splat = @{
    # a unique name for the volume
    'Name' = $volumeName;
 
    # the aggregate to create the volume on
    'Aggregate' = $aggrname;
 
    # the junction point for the volume. I find using the
    # volume name to be the easiest
    'JunctionPath' = "/$volumeName";
 
    # unix or ntfs, depending on the type of system 
    # accessing the data.  don't use mixed unless you are
    # in the process of transitioning
    'SecurityStyle' = 'unix';
 
    # for VMware volumes I do not enable automatic snaps,
    # instead relying on VSC's Backup and Recovery plugin
    'SnapshotPolicy' = 'none';
 
    # none = thin provision the volume
    # volume = thick provision the volume (default)
    'SpaceReserve' = 'none';
 
    # the amount of space reserved for snapshots
    'SnapshotReserve' = '0';
 
    # the size of the volume.  can be appended with "g"
    # for GB, "t" for TB, etc.
    'Size' = "$($volSize)g";
}
 
Get-NcVserver $svmName | New-NcVol @splat

We can also modify volume properties using the NPTK:

# Online, offline, or restrict a volume
Get-NcVol $volumeName | Set-NcVol -Online
Get-NcVol $volumeName | Set-NcVol -Offline
Get-NcVol $volumeName | Set-NcVol -Restricted
 
# modify the size of a volume to an absolute size. don't forget that this 
# will not affect LUN sizes, just the volume!
Get-NcVol $volumeName | Set-NcVolSize -NewSize 100g
 
# modify the size of a volume using a relative amount
Get-NcVol $volumeName | Set-NcVolSize -NewSize +10g
 
# modify the size of a volume using a relative size. this is particularly
# helpful when you know you want to keep a certain amount of free space,
# e.g. 20%, where you can just add a % of space
Get-NcVol $volumeName | Set-NcVolSize -NewSize +20%

If your volume is a SnapMirror source volume, remember that the destination will increase its size as well.

Volume Features

  • Thin Provisioning
    # enable thin provisioning
    Get-NcVol $volumeName | Set-NcVolOption -Key guarantee -Value none
     
    # disable thin provisioning
    Get-NcVol $volumeName | Set-NcVolOption -Key guarantee -Value volume
     
    # find thick provisioned volumes, enable thin provisioning
    Get-NcVol | ?{ ($_ | Get-NcVolOption -Hashtable).value.guarantee -ne "none" } |
      Set-NcVolOption -Key guarantee -Value none
  • Deduplication
    # get deduplication status
    Get-NcVol $volumeName | Get-NcSis
     
    # show space saved as a result of deduplication
    # this one is a bit odd, the Get-NcEfficiency cmdlet returns
    # an object for the "Returns" property, which contains the
    # information we want
    (Get-NcEfficiency $volumeName).Returns.Dedupe / 1gb
     
    # enable deduplication
    Get-NcVol $volumeName | Enable-NcSis | Start-NcSis
     
    # set deduplication schedule
    Get-NcVol $volumeName | Set-NcSis -Schedule "sun-sat@2"
     
    # scan entire volume, this is required if you enable deduplication
    # on a volume that has preexisting data in it
    Get-NcVol $volumeName | Start-NcSis -Scan
  • Compression
    # show status, this is a long command, but makes 
    # the output much easier to read
    Get-NcVol $volumeName | 
      Get-NcSis | 
      Select Path,@{N="Compression"; E={ $_.IsCompressionEnabled }}, `
        @{n="Inline Compression"; E={ $_.IsInlineCompressionEnabled }} | 
      Format-Table -AutoSize
     
    # enable post process 
    Get-NcVol $volumeName | Set-NcSis -Compression:$true
     
    # enable inline
    Get-NcVol $volumeName | Set-NcSis -InlineCompression:$true
     
    # disable compression
    Get-NcVol $volumeName | Set-NcSis -Compression:$false -InlineCompression:$false
  • AutoGrow / AutoShrink
    # get the current min/max size
    Get-NcVol $volumeName | Get-NcVolAutoSize
     
    # set the min/max size
    $splat = @{
        # the volume to modify
        "Name" = $volumeName;
     
        # grow_shrink will enable both actions
        "Mode" = 'grow_shrink';
     
        # set the minimum and maximum sizes
        "MinimumSize" = $minSize;
        "MaximumSize" = $maxSize;
    }
     
    Set-NcVolAutosize @splat
     
    # set the size at which the volume will grow/shrink
    Get-NcVol $volumeName | Set-NcVolAutosize -GrowThresholdPercent 97 `
      -ShrinkThresholdPercent 85
  • Fractional Reserve
    Fractional reserve is the amount of volume space that is reserved for LUN writes when a snapshot is taken. It is only applicable when the volume contains one or more LUNs. In recent versions of Data ONTAP there are only two available values: 0% and 100%.
    # view fractional reserve
    Get-NcVol $volumeName | Get-NcVolOption | ?{ $_.Name -eq "fractional_reserve" }
     
    # set fractional reserve, 0 = off, 100 = on
    Get-NcVol $volumeName | Set-NcVolOption -Key fractional_reserve -Value 100
  • Quality of Service
    QoS was added to clustered Data ONTAP 8.2 and is an extremely helpful feature for a couple of reasons. One of those is the obvious ability to limit the amount of IOPS or throughput that a volume can use. The other less obvious use is workload characterization. QoS collects many statistics about the monitored workload and can report information like IO size, R/W mix, and much more.
    # create a QoS policy
    Get-NcVserver $svmName | New-NcQosPolicyGroup -Name $qosPolicyName -MaxThroughput INF
     
    # modify limits, remember a QoS policy can be either IOPs or bits,
    # but cannot be both
    Set-NcQosPolicyGroup -Name $qosPolicyName -MaxThroughPut "1000IOPS"
    Set-NcQosPolicyGroup -Name $qosPolicyName -MaxThroughput "1gb/s"

    Assigning a QoS policy to a volume is not quite so graceful, so let’s create a couple of functions to make it easier:

    function Set-NcVolQosPolicyGroup {
        [CmdletBinding(SupportsShouldProcess=$true)]
        param(
            [parameter(
                Mandatory=$true,
                ValueFromPipeline=$true,
                ValueFromPipelineByPropertyName=$true
            )]
            [String]$Name
            ,
     
            [parameter(
                Mandatory=$true,
                ValueFromPipeline=$true,
                ValueFromPipelineByPropertyName=$true
            )]
            [String]$PolicyGroup
            ,
     
            [parameter(
                Mandatory=$false,
                ValueFromPipelineByPropertyName=$true
            )]
            [Alias("Vserver")]
            [String]$VserverContext = $null
     
        )
        process {
            # verify the volume
            $volume = Get-NcVol -Name $Name -VserverContext $VserverContext
     
            if (!$volume) {
                throw "Unable to find volume with name $($Name)"
            }
     
            # verify the QoS Policy Group
            $policy = Get-NcQosPolicyGroup -Name $PolicyGroup
     
            if (!$policy) {
                throw "Unable to find policy group with name $($PolicyGroup)"
            }
     
            # a query for the update action
            $query = Get-NcVol -Template
     
            # initialize the search for the volume we want
            Initialize-NcObjectProperty -Object $query -Name VolumeIdAttributes
     
            # specify we want to operate on the provided volume
            $query.VolumeIdAttributes.Name = $volume.Name
     
            # initialize the update template
            $attributes = Get-NcVol -Template
     
            # initialize the QoS attr property
            Initialize-NcObjectProperty -Object $attributes -Name VolumeQosAttributes
     
            $attributes.VolumeQosAttributes.PolicyGroupName = $PolicyGroup
     
            # update the volume
            if ($PSCmdlet.ShouldProcess(
                $volume, 
                "Attach policy group $($policy.PolicyGroup).")
            ) {
                Update-NcVol -Query $query -Attributes $attributes | Out-Null
            }
     
            $volume | Get-NcVolQosPolicyGroup
        }
    }
     
    function Remove-NcVolQosPolicyGroup {
        [CmdletBinding(SupportsShouldProcess=$true)]
        param(
            [parameter(
                Mandatory=$true,
                ValueFromPipelineByPropertyName=$true
            )]
            [String]$Name
            ,
     
            [parameter(
                Mandatory=$false,
                ValueFromPipelineByPropertyName=$true
            )]
            [Alias("Vserver")]
            [String]$VserverContext
        )
        process {
            # verify the volume
            $volume = Get-NcVol -Name $Name -VserverContext $VserverContext
     
            if (!$volume) {
                throw "Unable to find volume with name $($Name)"
            }
     
            # a query for the update action
            $query = Get-NcVol -Template
     
            # initialize the search for the volume we want
            Initialize-NcObjectProperty -Object $query -Name VolumeIdAttributes
     
            # specify we want to operate on the provided volume
            $query.VolumeIdAttributes.Name = $volume.Name
     
            # initialize the update template
            $attributes = Get-NcVol -Template
     
            # initialize the QoS attr property
            Initialize-NcObjectProperty -Object $attributes -Name VolumeQosAttributes
     
            $attributes.VolumeQosAttributes.PolicyGroupName = "none"
     
            # update the volume
            if ($PSCmdlet.ShouldProcess(
                $volume, 
                "Remove policy group.")
            ) {
                Update-NcVol -Query $query -Attributes $attributes | Out-Null
            }
     
            $volume | Get-NcVolQosPolicyGroup
        }
    }
     
    function Get-NcVolQosPolicyGroup {
        [CmdletBinding()]
        param(
            [parameter(
                Mandatory=$true,
                ValueFromPipelineByPropertyName=$true
            )]
            [String]$Name
            ,
     
            [parameter(
                Mandatory=$false,
                ValueFromPipelineByPropertyName=$true
            )]
            [Alias("Vserver")]
            [String]$VserverContext
        )
        process {
            Get-NcVol -Name $Name -VserverContext $VserverContext | Select-Object `
              Name,@{N="Policy Group Name"; E={ $_.VolumeQosAttributes.PolicyGroupName }}
     
        }
    }

    And now, with our functions, we can show, add, and remove QoS Policy Groups from volumes easily.

    # get the policy group for all volumes
    Get-NcVol | Get-NcVolQosPolicyGroup
     
    # remove a policy group for some volumes
    Get-NcVol $volume1,$volume2 | Remove-NcVolQosPolicyGroup
     
    # set the policy group for a volume
    Get-NcVol $volumeName | Set-NcVolQosPolicyGroup -PolicyGroup $policyGroup

Volume Options

Setting volume options allows you to customize the volume to particular applications and uses. Let’s look at showing, getting, and setting some options.

# get options for a volume
Get-NcVol $volumeName | Get-NcVolOption

All of the options you would normally modify at the command line can be manipulated using PowerShell.

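For example, a single option can be audited across every volume using the -Hashtable output, following the same pattern used for the guarantee option earlier:

# report which volumes still update file access times
Get-NcVol | Select-Object Name,@{N="no_atime_update"; E={ ($_ | Get-NcVolOption -Hashtable).value.no_atime_update }}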

For most NFS volumes which are serving VMware datastores, here are the settings I use:

$volume = Get-NcVol $volumeName
 
# convert and create unicode, to ensure that filenames and directories are 
# in a unified format
$volume | Set-NcVolOption -Key convert_ucode -Value on
$volume | Set-NcVolOption -Key create_ucode -Value on
 
# disable access time updates for files, which saves a few IOPS
$volume | Set-NcVolOption -Key no_atime_update -Value on
 
# do not automatically take snapshots (I rely on VSC B&R for this)
$volume | Set-NcVolOption -Key nosnap -Value on
 
# the .snapshot directory must be visible for VSC B&R to work correctly
$volume | Set-NcVolOption -Key nosnapdir -Value off

Snapshots

Snapshots are the core principle behind NetApp data protection technology. They are instant, have no performance penalty, and can be reverted to at any time quickly and easily.

# creating a snapshot
Get-NcVol $volumeName | New-NcSnapshot -Snapshot "An Example Snapshot"
 
# listing snapshots 
Get-NcVol $volumeName | Get-NcSnapshot
 
# reverting to a snapshot.  BE CAREFUL! if you revert a node root volume
# it will cause the node to reboot!
Get-NcVol $volumeName | Restore-NcSnapshotVolume -SnapName $snapName
 
# deleting a snapshot
Remove-NcSnapshot -Volume $volumeName -Snapshot $snapName
 
# delete all snaps for a volume
Get-NcVol $volumeName | Get-NcSnapshot | Remove-NcSnapshot

FlexClones

A FlexClone is a copy of a volume based on a snapshot that usually consumes no additional space, except for new and changed data. It is a writable instance of the data that is contained in the volume at the time of the snapshot. They are particularly useful for testing and development where you can snapshot the production data, FlexClone it to a writable volume, and then do test/dev with real, production data. Combining this with QoS creates the ability to do this without affecting performance for the production environment.

# create a FlexClone, this will create a new snapshot as the base
Get-NcVol $volumeName | New-NcVolClone -CloneVolume $clonedVolName
 
# create a FlexClone, using an existing snapshot as the base
Get-NcVol $volumeName | 
  New-NcVolClone -CloneVolume $clonedVolName -ParentSnapshot $snapName
 
# split a FlexClone
Get-NcVol $clonedVolName | Start-NcVolCloneSplit
 
# check the progress of a clone split
Get-NcVolCloneSplit -Name $clonedVolName

Splitting a FlexClone is an operation that happens when you want to separate the cloned instance from the original. It is useful when you want to create a full copy of the data for any reason.

Remember that you cannot delete the base snapshot of a volume with a FlexClone child until either the clone has been split or the clone has been destroyed (it is managed just like a regular volume).
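To check whether a clone still depends on its base snapshot, inspect the clone details. A sketch, assuming the toolkit's Get-NcVolClone cmdlet and its ParentVolume/ParentSnapshot properties:

# show a clone's parent volume and base snapshot
Get-NcVolClone $clonedVolName | Select-Object Volume,ParentVolume,ParentSnapshot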

Volume Moves

Volume moves are a clustered Data ONTAP feature (they aren’t available in 7-mode) that fundamentally changes how storage administration is done. Using volume move operations, it’s possible to vacate controllers, disk shelves, and aggregates to allow non-disruptive maintenance and lifecycle operations.

# check which aggregates a volume can move to
Get-NcVol $volumeName | Get-NcVolMoveTargetAggr
 
# initialize a volume move
Get-NcVol $volumeName | Start-NcVolMove -DestinationAggregate $newAggrName
 
# checking the status of the move
Get-NcVolMove $volumeName
 
# stopping / canceling a move
Get-NcVol $volumeName | Stop-NcVolMove


NetApp PowerShell Toolkit 101: Managing Data Access


Over the last several posts we have reviewed how to create and manage aggregates, SVMs, and volumes. All of that is great, but at this point you still can’t access that capacity to begin storing things. In this post we will discuss the various ways to access the volumes and the data inside them.

  • Junctioning
  • Export Policies
  • NFS Exports
  • CIFS/SMB Shares
  • LUNs
    • LUN Management
    • iGroups
    • LUN Mapping

Junctioning

A junction is the path by which a volume is accessed. Exports and CIFS/SMB shares are both “mounted” to the root of the storage virtual machine (SVM) using the junction path. That junction path is then used by storage consumers to access the volume and read/write data to it.

Let’s look at an example. If you have a volume, “volume1”, you can junction it however you like: “/volume1” would mean that, for NFS, the mount would be myNetApp.domain.com:/volume1, or for CIFS/SMB, it would be \\myNetApp.domain.com\volume1. If you had a second volume, creatively named “volume2”, you could junction it at the root as well (e.g. “/volume2”), or you could nest it under volume1, e.g. “/volume1/volume2”.

Additionally, you can name the junction whatever you want. The name of the volume and the junction name are completely separate entities and are not required to match.

# list junctions
Get-NcVol | Select Vserver,Name,JunctionPath
 
# junction a volume
Get-NcVol $volumeName | Mount-NcVol -Junctionpath $newPath
 
# unjunction a volume
Get-NcVol $volumeName | Dismount-NcVol
 
# change volume junction
Get-NcVol $volumeName | Dismount-NcVol | Mount-NcVol -JunctionPath $newPath

Export Policies

An export policy, despite the name, applies to both NFS exports and CIFS/SMB shares. The export policy is what determines the permissions for accessing the junction. Remember that these are specific to each SVM.

# list export policy assigned to volumes
Get-NcVol | Select Vserver,Name,@{N="Export Policy"; E={ $_.VolumeExportAttributes.Policy }}
 
# list rules for an export policy
Get-NcVserver $svmName | Get-NcExportPolicy $policyName | Get-NcExportRule
 
# create an export policy
Get-NcVserver $svmName | New-NcExportPolicy $policyName

We will discuss policy rules below and address them for each of the access protocols.

NFS Exports

NFS access is managed using export policy rules. Make sure that the NFS server has been started and the NFS version you want to use has been configured.

For VMware volumes, you will want to use “sys” or “all” for the RO, RW, and SU security flavors. For maximum security, create a new rule for each of the hosts which will be connecting to the export and set the client match rule to the ESXi host IP address. If you are using a private network for NFS traffic, using the subnet for that VLAN is also a safe bet.

# create an export policy rule for NFS access
$splat = @{
    # e.g., "nfs", "nfsv3", "nfsv4".  Can be more than one
    # using a list of comma separated values.
    "Protocol" = $protocol;
 
    # examples: 192.168.0.0/24, 0.0.0.0/0, etc.
    # it is, generally, a good idea to set this to be as restrictive
    # as is reasonable for security reasons.
    "ClientMatch" = $subnetRule;
 
    # any, none, krb5, ntlm, and sys are all valid values
    "ReadOnlySecurityFlavor" = $roRule;
 
    # same valid values as the Read-Only rule
    "ReadWriteSecurityFlavor" = $rwRule;
 
    # same valid values as the Read-Only rule
    "SuperUserSecurityFlavor" = $rootRule;
}
 
Get-NcVserver $svmName | Get-NcExportPolicy $policyName | New-NcExportRule @splat
 
# remove a rule
Get-NcVserver $svmName | Get-NcExportPolicy $policyName | Remove-NcExportRule -Index 2
 
# edit a rule
Get-NcVserver $svmName | Get-NcExportPolicy $policyName | 
  Get-NcExportRule -Index 1 | Set-NcExportRule -ReadOnlySecurityFlavor "any"

CIFS/SMB Shares

CIFS/SMB shares provide Windows clients access to data. Make sure that you have enabled the CIFS server and are joined to an Active Directory domain for authentication/authorization services. Shares are created/destroyed using the Add-NcCifsShare and Remove-NcCifsShare cmdlets. Export policies are optional for CIFS/SMB as of cDOT 8.2.

# view the CIFS/SMB server for an SVM
Get-NcVserver $svmName | Get-NcCifsServer
 
# disable SMB2 for a SVM
Get-NcVserver $svmName | Get-NcCifsServer | Set-NcCifsOption -DisableSmb2
 
# enable SMB2 and SMB3
Get-NcVserver $svmName | Get-NcCifsServer | Set-NcCifsOption -EnableSmb2 -IsSmb3Enabled:$true
 
# create a share
Get-NcVserver $svmName | Add-NcCifsShare -Name $shareName -Path $junctionPath
 
# create an export policy rule for CIFS/SMB access
$splat = @{
    # just like NFS, except we change the protocol here
    "Protocol" = "cifs";
 
    # examples: 192.168.0.0/24, 0.0.0.0/0, etc.
    # it is, generally, a good idea to set this to be as restrictive
    # as is reasonable for security reasons.
    "ClientMatch" = $subnetRule;
 
    # any, none, krb5, ntlm, and sys are all valid values
    # any and ntlm make the most sense for a CIFS/SMB share
    "ReadOnlySecurityFlavor" = $roRule;
 
    # same valid values as the Read-Only rule
    "ReadWriteSecurityFlavor" = $rwRule;
 
    # same valid values as the Read-Only rule
    "SuperUserSecurityFlavor" = $rootRule;
}
 
Get-NcVserver $svmName | Get-NcExportPolicy $policyName | New-NcExportRule @splat

My personal recommendation is to not use export policy rules to limit access to a share. NTFS permissions are a perfectly acceptable method of managing access to data. Plus, as a storage administrator, do you really want to be managing share permissions for the Windows admins?

LUNs

LUNs are the method of access for all block based protocols (FC, FCoE, iSCSI). They are created the same way; however, they are mapped to initiators slightly differently. Let’s look at creating a LUN, then we’ll look at iGroups, and finally mapping the LUNs.

  • LUN Management
    # create a LUN
    $splat = @{
        # the LUN path still starts with "/vol"
        'Path' = "/vol/volumeName/lunName";
     
        # the size. you can use k, m, g, t to help
        'Size' = "10g";
     
        # standard os types...e.g. vmware, linux, windows_2008, etc.
        'OsType' = "vmware";
     
        # set the LUN to be thin provisioned
        'Unreserved' = $true;
     
       # cDOT 8.3 only, this enables several primitives such as UNMAP
       # and out-of-space notifications 
       'ThinProvisioningSupportEnabled' = $true;
    }
     
    Get-NcVserver $svmName | New-NcLun @splat
     
    # move a LUN
    Get-NcVserver $svmName | Start-NcLunMove -Source $sourcePath -Destination $destinationPath
     
    # check LUN move progress
    Get-NcLunMove
     
    # get LUN details
    Get-NcLun -Path $lunPath | Format-List *
  • iGroups
    # list iGroups
    Get-NcIgroup
     
    # get iGroups for an initiator
    Get-NcIgroup | Where-Object {
        $_.Initiators.InitiatorName -contains $iqnOrWwpn
    } | Select-Object Name,Type,Protocol
     
    # create an iGroup
    $splat = @{
        'Name' = "MySpecialiGroup";
     
        # the protocol: iscsi, fcp, or mixed
        'Protocol' = "iscsi";
     
        # the OS of the clients, e.g. windows, linux, vmware
        'Type' = "vmware";
    }
     
    Get-NcVserver $svmName | New-NcIgroup @splat
     
    # add initiators to an iGroup
    "iqn.1998-01.com.vmware:host1","iqn.1998-01.com.vmware:host2",
    "iqn.1998-01.com.vmware:host3","iqn.1998-01.com.vmware:host4" | Foreach-Object {
        Add-NcIgroupInitiator -Name $igroupName -Initiator $_ -VserverContext $svmName
    }
  • LUN Mapping
    # get iGroup for LUN
    Get-NcLun -Path $lunPath | Get-NcLunMap
     
    # get LUNs mapped to an iGroup
    Get-NcLunMap | Where-Object { $_.InitiatorGroup -eq $igroupName }
     
    # get LUNs mapped to a host initiator
    Get-NcLunMapByInitiator -Initiator $hostInitiator
     
    # map a LUN to an iGroup
    Get-NcLun -Volume $volumeName | Add-NcLunMap -InitiatorGroup $igroupName

    Clustered Data ONTAP 8.3 will not show the LUN as accessible from all hosts by default. To add another host for LUN reporting (for example, when preparing to do a LUN move operation), you will need to explicitly add it to the map.

    # add a node reporting for a LUN
    Add-NcLunMapReportingNodes -Path $lunPath -InitiatorGroup $igroupName -Nodes node3,node4
     
    # add all nodes to the reporting
    Add-NcLunMapReportingNodes -Path $lunPath -InitiatorGroup $igroupName -All $true
     
    # remove a node for a LUN
    Remove-NcLunMapReportingNode -Path $lunPath -InitiatorGroup $igroupName -Nodes node1,node2


NetApp PowerShell Toolkit 101: Data Protection


Protecting data is arguably the most important job that your storage is entrusted with. Losing data is simply not an option, so it’s critical to protect data through the use of backups and replication.

There are different ways that you can replicate data in your clustered Data ONTAP system. First, you can replicate to a separate volume of the same SVM in the cluster. Second, to a volume that belongs to a different SVM in the same cluster. Finally, replication can be configured with another cluster entirely.

In this post we will cover:

  • Peering Relationships
    • Cluster Peers
    • SVM Peers
  • SnapMirror Policies
  • SnapMirror
    • Version Flexible SnapMirror
  • SnapVault
  • Load Sharing Mirrors

If you are interested in additional detail about SnapMirror and SnapVault in clustered Data ONTAP 8.3, please see the post I did over at DatacenterDude.com.

Peering Relationships

The first step in configuring replication relationships is to configure the cluster and SVM to peer with each other. This is only necessary when you are traversing the respective boundaries. For example, if you are SnapMirroring to a volume which belongs to the same SVM as the source, you do not need to configure peer relationships.

Note: You will need to configure at least one inter-cluster LIF (ICL) before you can replicate between clusters.

Cluster Peers
# show current peers
Get-NcClusterPeer
 
# create a peer
Add-NcClusterPeer -Address $peerIclAddress -Credential (Get-Credential)
 
# check peer connectivity for ICLs
Get-NcClusterPeer -Name $clusterPeerName | Get-NcClusterPeerHealth
 
# remove a peer
Get-NcClusterPeer -Name $clusterPeerName | Remove-NcClusterPeer
SVM Peers
# show current peers
Get-NcVserver -Name $svmName | Get-NcVserverPeer
 
# create a new SVM peer relationship
New-NcVserverPeer -Vserver $svmName -PeerCluster $remoteClusterName `
  -PeerVserver $remoteSvmName -Application snapmirror
 
# after submitting a peering request, you will need to confirm the
# relationship on the destination cluster
Confirm-NcVserverPeer -Vserver $localSvmName -PeerVserver $remoteSvmName
 
# remove all peers for an SVM
Get-NcVserver -Name $svmName | Get-NcVserverPeer | Remove-NcVserverPeer

SnapMirror Policies

The SnapMirror policy, and its constituent rules, determine the type of protection relationship, which snapshots to transfer, and the number of snapshot copies to keep at the destination.

# view policy details
Get-NcSnapmirrorPolicy -Name $policyName
 
# create a policy
$splat = @{
    # a unique name for the policy
    'Name' = $newPolicyName;
 
    # whether or not to restart if interrupted
    'Restart' = 'always';
 
    # async_mirror = SnapMirror, vault = SnapVault
    # mirror_vault = both
    'Type' = 'async_mirror';
 
    # use, or not, network compression
    'EnableNetworkCompression' = $true;
}
 
New-NcSnapmirrorPolicy @splat
 
# enable network compression for an existing policy
Get-NcSnapmirrorPolicy -Name $policyName | 
  Set-NcSnapmirrorPolicy -EnableNetworkCompression $true

SnapMirror

SnapMirror replicates a volume between source and destination, relying on snapshots to determine what data has changed (or been added) and therefore needs to be replicated.

The life cycle of a SnapMirror relationship has many different phases, roughly in the following order:

  • Initialize – this is the initial data transfer
  • Update – resyncs the volumes, transferring any new and changed data
  • Quiesce – finishes current update, then disables future resync operations. Normally a precursor to ending the relationship.
  • Resume – resumes updating the mirror after it has been quiesced or broken.
  • Break – after being quiesced, this will make the destination volume writable.
  • Resync – after the reason for making the destination writable has gone away (testing? disaster?), resync the relationship to the current state at the source.

There are three types of SnapMirror relationship. The first one is data protection. This is what is commonly referred to as simply “SnapMirror” in the NetApp lexicon. It is a protection relationship which replicates data from the source volume to a read-only “DP” volume. This type of relationship is meant to provide disaster recovery protection by allowing the destination volume to be made writable if necessary.

Note that when creating this relationship type, the destination volume must have a type of dp.
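A minimal sketch of creating that destination volume (this assumes New-NcVol's -Type parameter; the names are placeholders):

# create a dp-type volume on the destination at least as large as the
# source.  dp volumes are read-only until the mirror is broken, so no
# junction path is assigned.
Get-NcVserver $dstSvmName | New-NcVol -Name $dstVolumeName -Aggregate $dstAggrName -JunctionPath $null -Size "$($volSize)g" -Type dp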

SnapMirror is controlled from the destination, which means that all of the PowerShell commands should be targeted at the destination cluster.

# show protected volumes, note this will display both
# mirrored and vaulted volumes
Get-NcSnapmirror
 
# show mirrored volumes
Get-NcSnapmirror | Where-Object {
    $_.RelationshipType -eq "data_protection"
}
 
# show unhealthy relationships (both mirror and vault)
Get-NcSnapmirror | Where-Object { $_.IsHealthy -eq $false }
 
# show lag time for relationships (both mirror and vault)
Get-NcSnapmirror | Select-Object SourceLocation,DestinationLocation,`
  @{ 'N'="Lag Time Hours"; 'E'={ [Math]::Round($_.LagTime / 60 / 60, 2) } }
 
# create a new SnapMirror relationship
$splat = @{
    # the cluster is optional
    "Source" =  "$srcCluster://$srcSvmName/$srcVolumeName";
 
    # cluster is again optional if you are not configuring
    # replication between two clusters
    "Destination" = "$dstCluster://$dstSvmName/$dstVolumeName";
 
    # dp = SnapMirror
    "Type" = "dp";
 
    # DPDefault = SnapMirror
    # or use a custom policy of your own with a type of async-mirror
    "Policy" = "DPDefault";
 
    # view the available policies using the 
    # Get-NcJobCronSchedule cmdlet
    "Schedule" = "daily"
}
 
New-NcSnapmirror @splat
 
# initialize the relationship
Invoke-NcSnapmirrorInitialize -Source "$srcCluster://$srcSvmName/$srcVolumeName" `
  -Destination "$dstCluster://$dstSvmName/$dstVolumeName"
 
# manually update a relationship
Invoke-NcSnapmirrorUpdate -Source "$srcCluster://$srcSvmName/$srcVolumeName" `
  -Destination "$dstCluster://$dstSvmName/$dstVolumeName"
 
# quiesce and break the relationship
Invoke-NcSnapmirrorQuiesce -Source "$srcCluster://$srcSvmName/$srcVolumeName" `
  -Destination "$dstCluster://$dstSvmName/$dstVolumeName"
Invoke-NcSnapmirrorBreak -Source "$srcCluster://$srcSvmName/$srcVolumeName" `
  -Destination "$dstCluster://$dstSvmName/$dstVolumeName"
 
# resync the relationship (note that this will cause any
# changes since the last SnapMirror snapshot on the 
# destination to be lost)
Invoke-NcSnapmirrorResync -Source "$srcCluster://$srcSvmName/$srcVolumeName" `
  -Destination "$dstCluster://$dstSvmName/$dstVolumeName"
 
# release a relationship.  this will remove 
# snapshots related to the relationship from the source
# and destination volumes.  after releasing, the relationship
# cannot be resumed without a reinitialization.
Invoke-NcSnapmirrorRelease -Source "$srcCluster://$srcSvmName/$srcVolumeName" `
  -Destination "$dstCluster://$dstSvmName/$dstVolumeName"

Version Flexible SnapMirror

Clustered Data ONTAP 8.3 introduces a new type of SnapMirror relationship, known as “Version Flexible”. Traditionally, the SnapMirror destination must run the same version of Data ONTAP as the source, or higher. A version flexible relationship removes this limitation; however, there are some differences when creating the relationship:

  • The relationship type must be xdp when using the CLI, or vault when using the PowerShell Toolkit
  • A version flexible SnapMirror policy must be used (default policies are DPDefault, MirrorAllSnapShots, MirrorLatest, and MirrorAndVault). This is a policy which has a type of async-mirror or mirror-vault, but not vault (that would make it a SnapVault relationship).
# create a version flexible SnapMirror
$splat = @{
    # the cluster is optional
    "Source" =  "$srcCluster://$srcSvmName/$srcVolumeName";
 
    # cluster is again optional if you are not configuring
    # replication between two clusters
    "Destination" = "$dstCluster://$dstSvmName/$dstVolumeName";
 
    # use vault when creating a version flexible mirror
    "Type" = "vault";
 
    # Make sure to use the correct policy type.  Using "XDPDefault"
    # here would make this a SnapVault relationship, which is not
    # the desired/intended result
    "Policy" = "MirrorLatest";
 
    # view the available schedules using the
    # Get-NcJobCronSchedule cmdlet
    "Schedule" = "weekly"
}
 
New-NcSnapmirror @splat
 
# show version flexible snapmirror relationships
Get-NcSnapmirror | Where-Object {
    $_.RelationshipType -eq "extended_data_protection"
}

SnapVault

SnapVault relationships are meant to provide backup and archive functionality for data. Snapshot retention can be configured for longer periods, providing a larger window of recoverable data.

Creating a SnapVault relationship is 99% the same as a SnapMirror relationship. The difference when setting up the relationship is the type, which must be set to xdp for CLI or vault when using PowerShell, and the policy should be a vault type (default is XDPDefault).

# Show vaulted volumes
Get-NcSnapmirror | Where-Object {
    $_.RelationshipType -eq "vault"
}
 
# create a new SnapVault relationship
$splat = @{
    # the cluster is optional
    "Source" =  "$srcCluster://$srcSvmName/$srcVolumeName";
 
    # cluster is again optional if you are not configuring
    # replication between two clusters
    "Destination" = "$dstCluster://$dstSvmName/$dstVolumeName";
 
    # vault = SnapVault
    "Type" = "vault";
 
    # XDPDefault = SnapVault
    # or use a custom policy of your own with a type of "vault"
    "Policy" = "XDPDefault";
 
    # view the available schedules using the
    # Get-NcJobCronSchedule cmdlet
    "Schedule" = "hourly"
}
 
New-NcSnapmirror @splat

After creating the SnapVault relationship, it can be managed using the same initialize, update, quiesce, and break commands as SnapMirror.
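For example, a manual update of the vault uses exactly the same cmdlet shown earlier:

# manually update the vault relationship
Invoke-NcSnapmirrorUpdate -Source "$srcCluster://$srcSvmName/$srcVolumeName" `
  -Destination "$dstCluster://$dstSvmName/$dstVolumeName"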

Load Sharing Mirror

Finally, we have a special type of relationship known as a load sharing mirror. This is most frequently used for SVM root volumes to ensure that they are accessible in the event of aggregate failure on the primary node. This is not a traditional SnapMirror and does not provide the same functionality; instead, it is limited to mirroring data across nodes in a cluster to provide high availability and increased read performance for data that could be accessed from any node.

# create a load sharing relationship
New-NcSnapmirror -Source "//$svmName/$srcVolume" `
  -Destination "//$svmName/$dstVolume" -Type ls
 
# initialize the load sharing mirror
Invoke-NcSnapmirrorLsInitialize -Source "//$svmName/$srcVolume"
 
# update the load sharing mirror
Invoke-NcSnapmirrorLsUpdate -Source "//$svmName/$srcVolume"

The post NetApp PowerShell Toolkit 101: Data Protection appeared first on The Practical Administrator.

NetApp vRealize Integration Package for OnCommand WFA version 3.0.1


Not too long ago we released a new version of the vRealize Integration Package for OnCommand WFA v3.0, which had a significant number of improvements around speed, flexibility, and overall robustness.

vRealize Integration Package for OnCommand WFA workflow listing

The vRealize Integration Package is a series of vRO workflows which take advantage of Workflow Automation’s REST interface for executing workflows. Version 3 of the package has been almost completely rewritten to be faster and easier to use than previously. This was done by implementing functionality using scriptable tasks, storing the WFA REST host connection and referencing it, and improving the debug log output.

New with this version is the vRealize Orchestrator action getUserInputValues. This action relies on functionality found in WFA version 3.0+ where the REST API has the ability to return valid values for WFA inputs which are “query” fields. This action makes it easy to add dynamic, real-time population of workflow inputs based on the actual WFA data.

Unfortunately, there have been a number of obstacles that have conspired to make getting the package difficult. Primary among these was a policy change with the NetApp Communities which prevents submitting a compressed file to be hosted. The policy change was done for a good reason, and there’s nothing that can be done to reverse the decision, but we are diligently working to get the package availability resolved!

Additionally, a mistake was made during packaging, so the original version was missing an action which was needed for collecting workflow inputs to be sent to WFA.

With that in mind, there are two locations it is available now:

  • Me! Send me a message using the NetApp Communities private message system (asulliva), or send me an email (my communities username at netapp.com)
  • The Field Portal – Accessible to NetApp employees and partners using this link

For more information, be sure to see TR-4308: Software-Defined Storage with NetApp and VMware and TR-4306: Building Automation and Orchestration for Software-Defined Storage with NetApp and VMware.

If you have any questions, problems, feature requests, bug reports, or need for help of any kind, please don’t hesitate to contact me via the comments below, using the NetApp Communities private message system (my username is asulliva), or via email (my communities username at netapp.com).

The post NetApp vRealize Integration Package for OnCommand WFA version 3.0.1 appeared first on The Practical Administrator.


NetApp NFS Mount Access Denied By Server


Just a quick tip today. While setting up a lab I had the need to mount a cDOT (8.3.0) export from behind a NAT gateway. When attempting the mount operation I got a relatively unhelpful error:

mount.nfs: access denied by server while mounting nfs.server.name:/mount/path

After some digging, I found that the cause of this is a setting on the storage virtual machine (a.k.a. SVM, formerly vserver). The problem is that, by default, cDOT expects that a privileged port (<= 1024) will be used for the mount operation. When NAT happens between you and the export, you are at the mercy of the gateway device for which port is used. By setting the SVM NFS option mount-rootonly to disabled, this requirement is lifted.

To fix the problem from the cluster shell:

vserver nfs modify -vserver svmName -mount-rootonly disabled

To fix the problem using the NetApp PowerShell toolkit:

# create the config parameter template
$serviceConfig = Get-NcNfsService -Template
$serviceConfig.IsMountRootonlyEnabled = $false
 
# update the SVM configuration
Get-NcVserver $svmName | Set-NcNfsService -Attributes $serviceConfig
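To confirm the change took effect, read the setting back; IsMountRootonlyEnabled should now report False. (This assumes the Get cmdlet accepts the SVM from the pipeline the same way Set-NcNfsService does above.)

# verify the new setting
(Get-NcVserver $svmName | Get-NcNfsService).IsMountRootonlyEnabled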

The post NetApp NFS Mount Access Denied By Server appeared first on The Practical Administrator.

Populate vRO workflow inputs from WFA 3.0 using REST


One of the challenges when using the NetApp vRO Package for WFA has been making the inputs dynamically populate with the correct information from WFA. If you add a new volume, aggregate, SVM, or other entity to your NetApp, you want it to show up in your workflows so you can take advantage of it. There are workarounds, such as using the database or creating a filter/finder to retrieve the information, but each of those is flawed, primarily because they would not automatically pick up new query parameters if the WFA admin updated the workflow. Fortunately, WFA 3.0 has fixed this by adding a new REST method to the workflows namespace which will return the valid values for you.

New WFA REST method for retrieving workflow input values

To take advantage of this we need a helper action to retrieve the values from WFA, and another helper to provide the dependent inputs and extract the information we want from the response and put it into the correct format for a vRO input.

Querying WFA User Input Values Using REST

The first thing we need to do is create a helper action to abstract the query. This reduces the complexity of making the REST call to a single action with a single set of inputs, making it easy to execute without reinventing the wheel each time. If you are using the NetApp vRO Package for WFA version 3.0 or above, this action should already be available to you. If not, I recommend that you download the package now!

There are three major sections to the action:

  • Accept dependent inputs and make them ready for the GET operation
  • Query WFA for the valid values
  • Parse the result and put it into a more usable format

If you’re interested in the code, be sure to download the NetApp vRO Package for WFA and look at the action at com.netapp.oncommand.wfa -> getUserInputValues.

Integrating the values with your workflows

Now that we have the ability to easily query for valid values, let’s see how we put it to work. This will be similar to the previous post where we used a finder to populate the values in a workflow.

A workflow input, for WFA or vRO, can have zero or more dependent inputs which contribute to determining the valid values. For example, you can’t get a list of NetApp volumes without first knowing the cluster and storage virtual machine at a minimum, and sometimes you want/need the aggregate as well. This is highly dependent on the WFA workflow: sometimes you’ll need all three of those, sometimes only one or two. To make things even more complicated, WFA is case sensitive, which means that the workflow name, input name, and contributing values must all match case exactly.

Creating an Action to query WFA
Let’s use our trusty Create a Clustered Data ONTAP Volume workflow from before. Recall that it has four inputs:

  • ClusterName
  • VserverName
  • VolumeName
  • VolumeSizeInGB

We need to create a custom vRO action for each of the vRO inputs we want populated with values from WFA. We need to create these to account for the variances in the names of the input values as well as the return data we want to retrieve. Here is the code of the action to retrieve the ClusterName. Note that this action has a single input “workflowName”, which is the same as is used by the parent vRO workflow, and is not shown here.

// this is the name of the WFA input we are querying for
var inputName = "ClusterName";
 
// if there are any dependent parameters, they would be provided here
var dependentParameters = new Properties();
 
// execute the action referenced above, which abstracts querying WFA
// for the valid input values
var query = System.getModule("com.netapp.oncommand.wfa").getUserInputValues(workflowName, inputName, dependentParameters);
 
// a new array to store the values returned by the action
var ret = new Array();
 
// iterate over the returned values, looking for the column name we
// want.  when found, add it to the array of return values
for (var i = 0; i < query.length; i++) {
    // this value, "Name", is determined by the WFA SQL query
    ret[i] = query[i].get("Name");
}
 
// exit the action, returning the values
return ret;

The name of the column in the return data that’s used for populating the array values is determined by the SQL query in WFA. If that sounds confusing, well, it is. To determine what you should use here you can either use the WFA GUI or use the REST interface manually.

To use the GUI, edit the workflow, then click the “Setup” button in the upper left corner.

View WFA workflow setup

Once the setup details popup opens, browse to the “User Inputs” tab, then double click the input you’re interested in.

Edit WFA user input

In the new popup, there will be a link in the middle which will read “View or edit the SQL query executed at run time”. Click this link.

View query input SQL

This will open a new window with some SQL in it. You’ll notice in this example that Name is being used to refer to the name of the cluster.

WFA query input SQL
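If you’d rather skip the GUI, below is a rough PowerShell sketch of pulling the same input definitions over REST. Note that the URI layout and the XML element names here are assumptions based on the WFA 3.0 REST documentation, so verify them against your own server:

# credentials for the WFA server ($wfaServer is a placeholder)
$cred = Get-Credential

# fetch the workflow definition by name (remember: case sensitive!)
$uri = "https://$wfaServer/rest/workflows?name=" +
  [uri]::EscapeDataString("Create a Clustered Data ONTAP Volume")
$response = Invoke-RestMethod -Uri $uri -Credential $cred

# inspect the user input definitions (element names assumed)
$response.collection.workflow.userInputList.userInput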

Populating vRO workflow inputs

Now that we have created an action which queries WFA for our values and returns an array of strings (which is used to populate a dropdown in a vRO workflow), we need to integrate it with the workflow’s inputs.

Start by editing the workflow, and browsing to the “Presentation” tab. Select the input, then, in the bottom pane, select the “Properties” tab.

Editing vRO workflow inputs

Edit the value by clicking the purple puzzle piece. Note that if your input doesn’t look like the above screenshot, you will need to change it to an OGNL input (change the box with a line on the left side to the two-ended arrow). Search for the name of the action you created, mine is named WFAGetCluster. Select it, then edit the value for the needed input by clicking the pencil icon.

Edit vRO input values

If you followed from the original workflow creation, then you will have a workflow attribute named wfaWorkflowName. Select this as the value. Alternatively, you can use a text input and simply type the name of the source workflow (remember it’s case sensitive!).

At this point, click the OK button, then switch to the “Schema” view in the main workflow edit screen. Click the debug button and bask in your awesomeness!

showing a dynamically populated input

Taking it further

You can follow these same basic steps for any of the query inputs from your WFA workflow. If the input you are querying for depends on other inputs, then you simply need to add them to the action’s inputs and populate the dependentParameters Properties object. I have attached an example workflow and the dependent actions here as a reference.

If you have any questions, please don’t hesitate to reach out to me using the comments below, the NetApp Community private message system (my username is asulliva), or using email (my communities username at netapp.com).

The post Populate vRO workflow inputs from WFA 3.0 using REST appeared first on The Practical Administrator.

cDOT Performance Monitoring Using PowerShell


Performance monitoring is a complex topic, but it’s something that is vital to the successful implementation and maintenance of any system. In the past I’ve had several posts about using Perl for gathering performance statistics from a 7-mode system (using ONTAP 7.3.x, which is quite old at this point), so I thought it might be a good time for an update.

I originally documented some of this information in a response on the NetApp Community site. This post expands on that a bit and documents it externally.

The NetApp PowerShell Toolkit has three cmdlets which we can use to determine what objects, counters, and instances are available, and a fourth cmdlet to actually collect the data.

Finding the Right Performance Object

Performance reporting in the clustered Data ONTAP API is broken out by two things: Object and Counter. In order to monitor something, for example aggregate performance, we need to find the object which pertains to that “something”. We do this using the Get-NcPerfObject cmdlet.

Throughout the rest of this post I will be using the example of aggregate monitoring, specifically how many reads and writes are being done against an aggregate.

PS C:\> Get-NcPerfObject

Name                                               PrivilegeLevel
----                                               --------------
affinity                                           diag
affiperclass                                       diag
affiperqid                                         diag
affitotal                                          diag
aggregate                                          admin
...
...
...

For my cDOT 8.3 cluster this returned 358 items, which is a lot of different categories of monitoring! We can narrow the list down by filtering on the PrivilegeLevel. The most commonly monitored objects are at the admin or advanced privilege level, whereas diag is used for very detailed, infrequently needed counters. To view non-diag objects, we change the command slightly.

PS C:\Users\Andrew> Get-NcPerfObject | ?{ $_.PrivilegeLevel -ne "diag" }

Name                                               PrivilegeLevel
----                                               --------------
aggregate                                          admin
audit_ng                                           admin
audit_ng:vserver                                   admin
cifs                                               admin
cifs:node                                          admin
cifs:vserver                                       admin
client                                             admin
client:vserver                                     admin
cluster_peer                                       admin
cpx                                                admin
cpx_op                                             advanced
disk                                               admin
disk:constituent                                   admin
disk:raid_group                                    admin
ext_cache                                          admin
ext_cache_obj                                      admin

This results in just 113 objects returned, a much shorter list to consider. This privilege level also indicates how much permission on the cluster the user collecting the information will need. A user with diag privileges is going to have considerably more permission on the cluster than one with only admin or advanced.

Finding the Counters

Now that we know what objects are available, we have a categorical view of what can be monitored. To find out what counters are collected for each object we use the Get-NcPerfCounter cmdlet. Using the aggregate object as an example, we see the following:

PS C:\Users\Andrew> Get-NcPerfCounter -Name aggregate | ?{ $_.PrivilegeLevel -ne "diag" } | Select-Object Name,PrivilegeLevel,Unit,Properties,Desc | Format-Table

Name                  PrivilegeLevel Unit    Properties        Desc
----                  -------------- ----    ----------        ----
cp_read_blocks        admin          per_sec rate              Number of blocks read per second during a CP on the aggregate
cp_read_blocks_hdd    admin          per_sec rate              Number of blocks read per second during a CP on the aggregate HDD disks
cp_read_blocks_ssd    admin          per_sec rate              Number of blocks read per second during a CP on the aggregate SSD disks
cp_reads              admin          per_sec rate              Number of reads per second done during a CP to the aggregate
cp_reads_hdd          admin          per_sec rate              Number of reads per second done during a CP to the aggregate HDD disks
cp_reads_ssd          admin          per_sec rate              Number of reads per second done during a CP to the aggregate SSD disks
instance_name         admin          none    string            Name of the aggreagte instance
instance_uuid         admin          none    string            UUID for aggregate instance
node_name             admin          none    string            Node Name
node_uuid             admin          none    string,no-display System node id
total_transfers       admin          per_sec rate              Total number of transfers per second serviced by the aggregate
total_transfers_hdd   admin          per_sec rate              Total number of transfers per second serviced by the aggregate HDD disks
total_transfers_ssd   admin          per_sec rate              Total number of transfers per second serviced by the aggregate SSD disks
user_read_blocks      admin          per_sec rate              Number of blocks read per second on the aggregate
user_read_blocks_hdd  admin          per_sec rate              Number of blocks read per second on the aggregate HDD disks
user_read_blocks_ssd  admin          per_sec rate              Number of blocks read per second on the aggregate SSD disks
user_reads            admin          per_sec rate              Number of user reads per second to the aggregate
user_reads_hdd        admin          per_sec rate              Number of user reads per second to the aggregate HDD disks
user_reads_ssd        admin          per_sec rate              Number of user reads per second to the aggregate SSD disks
user_write_blocks     admin          per_sec rate              Number of blocks written per second to the aggregate
user_write_blocks_hdd admin          per_sec rate              Number of blocks written per second to the aggregate HDD disks
user_write_blocks_ssd admin          per_sec rate              Number of blocks written per second to the aggregate SSD disks
user_writes           admin          per_sec rate              Number of user writes per second to the aggregate
user_writes_hdd       admin          per_sec rate              Number of user writes per second to the aggregate HDD disks
user_writes_ssd       admin          per_sec rate              Number of user writes per second to the aggregate SSD disks

Notice that, once again, I removed the counters which are at the diag level. You may want to look at them, but for the most part they are things that only infrequently need to be monitored because they are very low level details.

I included the properties field because it’s very important…it tells us how to read the counter. From the API documentation:

  • raw: single counter value is used
  • delta: change in counter value between two samples is used
  • rate: delta divided by the time in seconds between samples is used
  • average: delta divided by the delta of a base counter is used
  • percent: 100*average is used

Looking at the descriptions, it appears that we want to look at the user_reads, user_writes, and total_transfers counters to determine how much activity is happening on our aggregate. Each of these is a rate counter, which means we need to measure it once, wait some known amount of time (e.g. 5 seconds), then measure again and divide by the number of seconds.

Instances of the Object

Now that we know the objects and counters, and we’ve determined what we want to monitor, we need to find the instances. To do that we use the Get-NcPerfInstance cmdlet.

PS C:\Users\Andrew> Get-NcPerfInstance -Name aggregate | Where-Object { $_.Name -notlike "*root" }

Name                   Uuid
----                   ----
VICE01_aggr1_sas       96f8b6c9-4444-11b2-be67-123478563412
VICE02_aggr1_sas       49f45938-45a8-11b2-9ea8-123478563412
VICE03_aggr1_sas       0b916a30-45a8-11b2-9a6d-123478563412
VICE04_aggr1_sas       6ee009b9-45a8-11b2-8bac-123478563412
VICE05_aggr1_sata      8dffa99a-45a8-11b2-839d-123478563412
VICE06_aggr1_sata      15c61be8-b5a6-4db1-b61a-8566bd967c32

I excluded root aggregates from this listing using the Where-Object snippet because I’m not interested in those at this time.

Reporting Performance

We now have everything needed to monitor performance: the object, the counters, and the instance. We use the Get-NcPerfData cmdlet to query for information.

Get-NcPerfData -Name aggregate -Instance VICE01_aggr1_sas -Counter user_reads,user_writes,total_transfers

Here is what it looks like in action:

PS C:\> (Get-NcPerfData -Name aggregate -Instance VICE01_aggr1_sas -Counter user_reads,user_writes,total_transfers).counters | Select-Object Name,Value

Name            Value
----            -----
total_transfers 10477200561
user_reads      10168492251
user_writes     157344312

Remember that these are rate counters. To determine the values, we simply measure at two intervals and divide…

# collect the first values
$one = (Get-NcPerfData -Name aggregate -Instance VICE01_aggr1_sas -Counter user_reads,user_writes,total_transfers).counters

# wait a few seconds
Start-Sleep -Seconds 5

# collect the second values
$two = (Get-NcPerfData -Name aggregate -Instance VICE01_aggr1_sas -Counter user_reads,user_writes,total_transfers).counters

# an object to print results in
$result = "" | Select-Object "user_reads","user_writes","total_transfers"

# do the math for each counter...(value_at_t2 - value_at_t1) / time
$result.user_reads = (($two | ?{ $_.Name -eq "user_reads" }).value - ($one | ?{ $_.Name -eq "user_reads" }).value ) / 5
$result.user_writes = (($two | ?{ $_.Name -eq "user_writes" }).value - ($one | ?{ $_.Name -eq "user_writes" }).value ) / 5
$result.total_transfers = (($two | ?{ $_.Name -eq "total_transfers" }).value - ($one | ?{ $_.Name -eq "total_transfers" }).value ) / 5

# print the result
$result

And the output, remember this is a per second average over the time between polls (5 seconds in this instance):

user_reads user_writes total_transfers
---------- ----------- ---------------
      47.4        18.6            81.6

We can modify this slightly to get a per-second report for an aggregate:

$aggregate = "VICE01_aggr1_sas"
$waitSeconds = 1

Write-Host "user_reads user_writes total_transfers"
Write-Host "---------- ----------- ---------------"

# collect the first values
$one = (Get-NcPerfData -Name aggregate -Instance $aggregate -Counter user_reads,user_writes,total_transfers).counters

while ($true) {
    # wait a bit
    Start-Sleep -Seconds $waitSeconds

    # collect the second values
    $two = (Get-NcPerfData -Name aggregate -Instance $aggregate -Counter user_reads,user_writes,total_transfers).counters

    # an object to print results in
    $result = "" | Select-Object "user_reads","user_writes","total_transfers"

    # do the math for each counter...(value_at_t2 - value_at_t1) / time...and print
    $result.user_reads = (($two | ?{ $_.Name -eq "user_reads" }).value - ($one | ?{ $_.Name -eq "user_reads" }).value ) / $waitSeconds
    $result.user_writes = (($two | ?{ $_.Name -eq "user_writes" }).value - ($one | ?{ $_.Name -eq "user_writes" }).value ) / $waitSeconds
    $result.total_transfers = (($two | ?{ $_.Name -eq "total_transfers" }).value - ($one | ?{ $_.Name -eq "total_transfers" }).value ) / $waitSeconds

    # format the output and display it
    "{0,10} {1,11} {2,15}" -f $result.user_reads,$result.user_writes,$result.total_transfers

    # set the starting values for the next iteration
    $one = $two
}

Giving us an easy to read, per second, output of the number of reads, writes, and total transfers for our aggregate…

user_reads user_writes total_transfers
---------- ----------- ---------------
       102           0             102
         0           0               0
         1           0               1
         0           0               0
         7          26              89
         1          40              58

Performance Monitoring is Fun!

This has been just a short introduction to performance monitoring of a cDOT system using the PowerShell Toolkit. There is a huge number of things that can be monitored, and you can choose to display the information however you like…maybe a real-time report of performance for troubleshooting, intermittent collection to go into a summary report, collection at regular intervals to feed into a trend analysis tool.
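As a small example of the “intermittent collection” idea, each sample from the loop above could be timestamped and appended to a CSV for later trending (the file name here is arbitrary):

# inside the loop, after computing $result:
$result | Select-Object @{N="Timestamp"; E={ Get-Date }},user_reads,user_writes,total_transfers |
  Export-Csv -Path aggr_perf.csv -Append -NoTypeInformation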

Please reach out to me using the comments below or the NetApp Community site with any questions about how to collect performance information from your systems.

The post cDOT Performance Monitoring Using PowerShell appeared first on The Practical Administrator.

cDOT Environment Monitoring Using PowerShell


Environmental information, for example temperature, fan speed, etc., provides critical insight into the health of your clustered Data ONTAP system. Depending on your version of ONTAP, you can query the environmental information in different ways to find out the status.

With ZAPI version 1.21 and above (cDOT 8.2.3+) the environment-sensors-get-iter API exists, which makes it exceptionally easy to collect environmental information about the controllers. We can take the same approach with environmental sensors as with performance information:

Querying ZAPI Directly

Using the Invoke-NcSystemApi cmdlet we can execute ZAPI directly. Don’t be intimidated by this, it’s quite easy! Because the API is an iterator, there is the potential for not all of the results to be returned at once, so we must call the API repeatedly until all of them have been returned. The cluster knows where to resume by using the “next-tag” item:

$sensors = Invoke-NcSystemApi '<environment-sensors-get-iter></environment-sensors-get-iter>'
$result = $sensors.results.'attributes-list'.'environment-sensors-info'

while ($true) {
    if ($sensors.results.'next-tag') {
        $tag = $sensors.results.Item('next-tag').InnerXml

        $sensors = Invoke-NcSystemApi "<environment-sensors-get-iter><tag>$($tag)</tag></environment-sensors-get-iter>"
        $result += $sensors.results.'attributes-list'.'environment-sensors-info'
    } else {
        break
    }
}

The $result variable now contains an array of XML objects with the results. PowerShell conveniently turns the ZAPI response (which is XML) into an object which can be accessed using the traditional dot notation. To view the returned data:

$result

Which shows us a long list (depending on how many controllers you have). Here is a snippet from my cluster:

discrete-sensor-state  : normal
discrete-sensor-value  : GOOD
node-name              : VICE-01
sensor-name            : PSU2
sensor-type            : fru
threshold-sensor-state : normal

discrete-sensor-state  : normal
discrete-sensor-value  : GOOD
node-name              : VICE-01
sensor-name            : PSU1
sensor-type            : fru
threshold-sensor-state : normal

discrete-sensor-state  : normal
discrete-sensor-value  : GOOD
node-name              : VICE-01
sensor-name            : Fan3
sensor-type            : fru
threshold-sensor-state : normal

Finding the Desired Sensor(s)

Much like with performance reporting, there are three levels of detail:

  • Sensor Type
  • Sensor Name
  • Sensor Value(s)

To show the list of all the different sensor types we first query the API, as above, then we find all the unique sensor types:

$result.'sensor-type' | Sort-Object | Get-Unique

This returns eight different sensor types from my system.

battery_life
counter
current
discrete
fan
fru
thermal
voltage

For the full list we can simply look to the documentation:

  • “fan” – FAN RPM sensors
  • “thermal” – Temperature sensors
  • “voltage” – Voltage measurement sensors
  • “current” – Current measurement sensors
  • “battery-life” – Sensors report battery life
  • “discrete” – Discrete sensors
  • “fru” – FRU sensors
  • “nvmem” – Sensors on the NVMEM module
  • “counter” – Sensors report in counters
  • “minutes” – Sensors report by minutes
  • “percent” – Sensors report in percentage
  • “agent” – Sensors on or through the Agent device
  • “unknown” – Unknown sensors

Going down to the next level, we want to find the sensor names for each type. Let’s look at an example using the thermal type:

($result | ?{ $_.'sensor-type' -eq "thermal" }).'sensor-name' | Sort-Object | Get-Unique

The sensor type thermal has 14 sensors:

Bat Temp
CPU0 Temp Margin
CPU1 Temp Margin
In Flow Temp
IO Mid1 Temp
IO Mid2 Temp
LM56 Temp
NVMEM Bat Temp
Out Flow Temp
PCI Riser_R Temp
PCI Slot Temp
PSU1 Temp
PSU2 Temp
Smart Bat Temp

And, finally, we can get the value(s) we’re interested in:

$result | ?{ 
    $_.'sensor-type' -eq "thermal" -and $_.'sensor-name' -eq "In Flow Temp" 
} | Select-Object 'node-name','threshold-sensor-value','threshold-sensor-state'

Making It Better

Putting it all together we get this:

$sensors = Invoke-NcSystemApi '<environment-sensors-get-iter></environment-sensors-get-iter>'
$result = $sensors.results.'attributes-list'.'environment-sensors-info'

while ($true) {
    if ($sensors.results.'next-tag') {
        $tag = $sensors.results.Item('next-tag').InnerXml

        $sensors = Invoke-NcSystemApi "<environment-sensors-get-iter><tag>$($tag)</tag></environment-sensors-get-iter>"
        $result += $sensors.results.'attributes-list'.'environment-sensors-info'
    } else {
        break
    }
}

$result | ?{ 
    $_.'sensor-type' -eq "thermal" -and $_.'sensor-name' -eq "In Flow Temp"
} | Select 'node-name','threshold-sensor-value','value-units','threshold-sensor-state'

The result, for my cluster, looks like this:

node-name threshold-sensor-value value-units threshold-sensor-state
--------- ---------------------- ----------- ----------------------
VICE-01   30                     C           normal
VICE-02   28                     C           normal
VICE-03   27                     C           normal
VICE-04   28                     C           normal
VICE-05   28                     C           normal
VICE-06   29                     C           normal
VICE-07   30                     C           normal
VICE-08   29                     C           normal

Of course, this is all terribly inefficient since it collects all of the sensor information for each query…about 100 per node. We can narrow down the scope of each query using the API, which will make things much faster. For example, we can limit the sensor type and sensor name by modifying the ZAPI appropriately.

$sensors = Invoke-NcSystemApi '
<environment-sensors-get-iter>
    <sensor-type>thermal</sensor-type>
    <sensor-name>In Flow Temp</sensor-name>
</environment-sensors-get-iter>'

$result = $sensors.results.'attributes-list'.'environment-sensors-info'

$result | Select 'node-name','threshold-sensor-value','value-units','threshold-sensor-state'

This results in a much faster call (about 2 seconds vs 12 seconds), and a lot less PowerShell for filtering out unwanted objects. The result is exactly the same as above, but only those 8 results are returned instead of the original 790, which means we don’t have to work as hard for the iteration either.
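If you want to verify the timing difference on your own cluster, Measure-Command makes a rough comparison easy (rough, because the unfiltered call shown here measures only the first batch of results, without the iteration):

# unfiltered: every sensor comes back
Measure-Command {
    Invoke-NcSystemApi '<environment-sensors-get-iter></environment-sensors-get-iter>'
}

# filtered: only the sensors we asked for
Measure-Command {
    Invoke-NcSystemApi '<environment-sensors-get-iter><sensor-type>thermal</sensor-type><sensor-name>In Flow Temp</sensor-name></environment-sensors-get-iter>'
}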

Please reach out to me using the comments below or the NetApp Community site with any questions about how to collect environmental information from your systems.

The post cDOT Environment Monitoring Using PowerShell appeared first on The Practical Administrator.

NetApp PowerShell Toolkit: Aggregate Overcommitment Report


I recently encountered a posting on the NetApp Community asking about, among other things, allocated capacity for an aggregate. As you can see in my response, I created a quick scriptlet which displays the information regarding total volume capacity allocated, but this is only part of the potentially thin provisioned capacity. LUNs can also be thin provisioned inside the volume. Additionally, some may find it useful to know how much overcommitment would exist with no storage efficiency applied (this can help with IOPS density calculations, for example).

To address this, I created a function which will display the total, used, saved, and committed capacity for an aggregate…

Show Overcommitment for Non-Root Aggregates

Get-NcAggr | ?{ $_.Name -notlike "*root" } | Get-NcAggrOvercommitReport | Format-Table -AutoSize


Show Aggregates Which Have Overcommitment > 200%

Get-NcAggr | ?{ $_.Name -notlike "*root" } | Get-NcAggrOvercommitReport | ?{ $_.CommittedPercent -gt 200 }


The code is straightforward, simply checking to see if there are LUNs in the volumes whose allocation exceeds the volume size. If so, we use that as the value for the volume instead of its actual size.

Since we’re potentially querying a lot of information (# aggregates * # volumes * # of LUNs), it’s a good idea to use the template concept to choose what information is returned from the ZAPI queries. This not only substantially reduces the amount of information traversing the network, but also reduces execution time.

#
# Aggregate overcommitment report
#
# Shows the cumulative total of allocated space for thin provisioned
# LUNs and volumes for an aggregate.
#
function Get-NcAggrOvercommitReport {
    param(
        [Parameter(
            Mandatory=$true,
            ValueFromPipeline=$true,
            ValueFromPipelinebyPropertyName=$true
        )]
        [Alias("Name")]
        [String[]]
        $Aggregate
    )
    process {
        # get the aggregate object
        $aggrQuery = Get-NcAggr -Template
        Initialize-NcObjectProperty -Object $aggrQuery -Name Name,AggrSpaceAttributes

        $aggr = Get-NcAggr -Name $Aggregate -Attributes $aggrQuery

        $allocated = 0
        $saved = 0

        # loop through the volumes for this aggregate
        $volQuery = Get-NcVol -Template
        Initialize-NcObjectProperty -Object $volQuery -Name Name,Vserver,VolumeSpaceAttributes,VolumeSisAttributes

        # determine the total allocated for each volume
        foreach ($volume in (Get-NcVol -Aggregate $aggr -Attributes $volQuery)) { 
            # start with the size of the volume
            $volAllocated = $volume.VolumeSpaceAttributes.SizeTotal

            # sum the size of all LUNs in the volume
            #$lunQuery = Get-NcLun -Template
            #Initialize-NcObjectProperty -Object $lunQuery -Name Size

            $lunAllocated = ((Get-NcLun -Volume $volume).Size | Measure-Object -Sum).Sum

            # if there are thin provisioned LUNs which exceed the size
            # of the volume, then use that size as the total for this
            # volume
            if ($lunAllocated -gt $volAllocated) {
                $volAllocated = $lunAllocated
            }

            # add this volume to the total
            $allocated += $volAllocated

            # add the amount of space saved for this volume to the total
            #$saved += (Get-NcEfficiency -Volume $volume.Name -Vserver $volume.Vserver).Returns.Total
            $saved += $volume.VolumeSisAttributes.TotalSpaceSaved
        }

        $result = "" | Select Name,Total,Used,UsedPercent,Saved,SavedPercent,Committed,CommittedPercent
        
        $result.Name = $aggr.Name

        # total TB in the aggr
        #$result.TotalTB = [Math]::Round($aggr.AggrSpaceAttributes.SizeTotal / 1TB, 2)
        $result.Total = ConvertTo-FormattedNumber -Value $aggr.AggrSpaceAttributes.SizeTotal -Type DataSize -NumberFormatString "0.00"

        # total TB used/consumed
        #$result.UsedTB = [Math]::Round($aggr.AggrSpaceAttributes.SizeUsed / 1TB, 2)
        $result.Used = ConvertTo-FormattedNumber -Value $aggr.AggrSpaceAttributes.SizeUsed -Type DataSize -NumberFormatString "0.00"

        # % of capacity consumed
        $result.UsedPercent = [Math]::Round((($aggr.AggrSpaceAttributes.SizeUsed) / $aggr.AggrSpaceAttributes.SizeTotal) * 100, 2)
        
        # total TB saved by efficiency
        #$result.SavedTB = [Math]::Round($saved / 1TB, 2)
        $result.Saved = ConvertTo-FormattedNumber -Value $saved -Type DataSize -NumberFormatString "0.00"

        # % of capacity saved by efficiency
        $result.SavedPercent = [Math]::Round(($saved / $aggr.AggrSpaceAttributes.SizeTotal) * 100, 2)
        
        # total TB committed, including thin LUNs and thin Vols
        #$result.CommittedTB =  [Math]::Round($allocated / 1TB, 2)
        $result.Committed = ConvertTo-FormattedNumber -Value $allocated -Type DataSize -NumberFormatString "0.00"

        # % of capacity committed, > 100% means the aggr is overcommitted
        $result.CommittedPercent = [Math]::Round(($allocated / $aggr.AggrSpaceAttributes.SizeTotal) * 100, 2)
        
        # send the result down the pipeline
        Write-Output $result 
    }
}

The post NetApp PowerShell Toolkit: Aggregate Overcommitment Report appeared first on The Practical Administrator.

NetApp PowerShell Toolkit – Templates


There’s one particular part of the NetApp PowerShell Toolkit which is not frequently used, but is extremely powerful. Templates can be created for many of the object types which are used to create a query for specific objects, or for limiting the amount of information returned from a particular cmdlet invocation.

To get started, we first need to initialize the object for our query or attribute limiting template. To do this we use the -Template parameter to our cmdlet.

# create an empty Aggregate object template
$aggrTemplate = Get-NcAggr -Template

If we were to look at this object, we would see that it is empty.

Many of the properties associated with an object, such as an aggregate, volume, or LUN, are objects themselves. If we want to use a property of a child object as the query filter then we need to initialize that property in the template object.

# create an empty Aggregate object template
$aggrTemplate = Get-NcAggr -Template

# initialize a property of the template
Initialize-NcObjectProperty -Object $aggrTemplate -Name AggrRaidAttributes

# alternatively, initialize all properties during template creation
$aggrTemplate = Get-NcAggr -Template -Full

Our template object now has the AggrRaidAttributes property object populated.

At this point we’re ready to use the template, let’s look at how to use it as a query or to limit the attributes returned.

Query / Filter Templates

Using query templates means that filtering happens on the NetApp, not the client side. If you execute Get-NcVol against a cluster which has thousands of volumes, it may take a while to execute, and when it does your client will have to work hard to process the result and turn it into objects. This is even more important when you’re looking for volumes with specific properties…it’s far cheaper than having all volumes returned and then piping to Where-Object to select a handful.

# create an empty Aggregate object template
$aggrTemplate = Get-NcAggr -Template

# initialize a property of the template
Initialize-NcObjectProperty -Object $aggrTemplate -Name AggrRaidAttributes

# specify the value of the property we want to filter
# in this case we only want hybrid aggregates
$aggrTemplate.AggrRaidAttributes.IsHybrid = $true

# execute the query
Get-NcAggr -Query $aggrTemplate

At this point the returned objects would be exactly as expected, they contain all properties of a standard Get-* cmdlet invocation.

Queries without the template

There is also a way to shortcut doing a query using hash tables. This makes it super easy to do queries on the NetApp side and eliminate work from the client.

# query for volumes on a particular aggregate
Get-NcVol -Query @{Volume=$volName}

# we can nest the hashes as well for those attributes which are objects
# for example, get all volumes which are not node root volumes
Get-NcVol -Query @{VolumeStateAttributes=@{IsNodeRoot=$false}}

Query with wildcards and relative numbers

The extremely simple examples above use static values for the queried properties, but we can use wildcards and other operators on the values as well.

# get all volumes more than 10TB in size
Get-NcVol -Query @{TotalSize=">$(10TB)"}

# get volumes with root in the name
Get-NcVol -Query @{Name="*root*"}

# get volumes without root at the end of the name
Get-NcVol -Query @{Name="!*root"}

# get disks which are not a part of an aggregate
Get-NcDisk -Query @{DiskRaidInfo=@{ContainerType="!aggregate"}}

# non-root volumes which are between 100GB and 1TB in size
Get-NcVol -Query @{VolumeStateAttributes=@{IsNodeRoot=$false};TotalSize="$(100GB)..$(1TB)";}

The list of query operators we can use here is the same as at the CLI; OR (|), less than or equal to (<=), and greater than or equal to (>=) are all available as well, as the quick examples below show.
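A few examples of those operators (names and sizes are placeholders):

# volumes named vol1 OR vol2
Get-NcVol -Query @{Name="vol1|vol2"}

# volumes 100GB or smaller
Get-NcVol -Query @{TotalSize="<=$(100GB)"}

# volumes 10TB or larger
Get-NcVol -Query @{TotalSize=">=$(10TB)"}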

Attribute Limiting Templates

For many of the objects returned by the Toolkit there is a huge amount of information available. This is useful when we don't know what information we're looking for, or if we want to explore around and see what information is in the result. But, if we are executing a script which doesn't need all that extra info it can greatly speed up execution by reducing the information returned...not to mention remove that extra work from the system on the other end.

# create an empty Aggregate object template
$aggrTemplate = Get-NcAggr -Template

# using this empty object will return only the bare minimum properties 
# for each aggregate object, which is really only the name
Get-NcAggr -Attributes $aggrTemplate


Returning only the name is helpful if we only need that information, for example if we're piping into a Foreach-Object loop and querying for volumes per aggregate (a sketch of this follows below). Notice that with this empty template none of the other properties in the output are populated.
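Here's that sketch: a name-only template keeps the aggregate objects tiny while we count the volumes on each one (the output formatting is just an example):

# create an empty Aggregate object template
$aggrTemplate = Get-NcAggr -Template

# count volumes per aggregate using only the aggregate names
Get-NcAggr -Attributes $aggrTemplate | ForEach-Object {
    "{0}: {1} volumes" -f $_.Name, @(Get-NcVol -Aggregate $_.Name).Count
}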

If we want additional properties we use the same Initialize-NcObjectProperty cmdlet to specify the properties.

# create an empty Aggregate object template
$aggrTemplate = Get-NcAggr -Template

# initialize the AggregateSpaceAttributes attribute to get only space information
Initialize-NcObjectProperty -Object $aggrTemplate -Name AggrSpaceAttributes

# we can now retrieve the aggregates with only the space information returned
Get-NcAggr -Attributes $aggrTemplate

# note that the shortcut method with hashes works here too
Get-NcAggr -Attributes @{AggrSpaceAttributes=@{}}


Templates are awesome!

Templates are a powerful feature which can help you significantly speed up the execution of your scripts. For proof, in my environment filtering on the storage side reduced execution time by 75%, and returning only the properties needed reduced execution time by almost 85%!
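Here's a sketch of how to run that comparison yourself (absolute times will vary with cluster size and load):

# client-side filtering: everything comes back, then gets discarded
Measure-Command { Get-NcVol | ?{ $_.VolumeStateAttributes.IsNodeRoot -eq $false } }

# storage-side filtering with a query
Measure-Command { Get-NcVol -Query @{VolumeStateAttributes=@{IsNodeRoot=$false}} }

# attribute limiting: return only the space information
Measure-Command { Get-NcVol -Attributes @{VolumeSpaceAttributes=@{}} }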


The post NetApp PowerShell Toolkit – Templates appeared first on The Practical Administrator.

NetApp PowerShell Toolkit: Authentication


There are multiple ways to do authentication to NetApp systems when using the PowerShell Toolkit. This ranges from the simple and obvious one-time connection, to securely storing credentials for future use. Saving credentials can be useful when executing scripts from a host non-interactively, such as with scheduled tasks or triggered through another script.

Connecting to a Single Controller

The Connect-NcController cmdlet is the standard method of connecting to a clustered Data ONTAP controller. Connect-NaController is the 7-mode equivalent and works identically. Additionally, the same credential rules apply for the Invoke-NcSsh and Invoke-NaSsh cmdlets as well, as shown in the example below.
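For example, assuming $myController and $credential already hold a controller name and a PSCredential, running a CLI command over SSH looks something like this (a sketch; check Get-Help Invoke-NcSsh for the parameters in your toolkit version):

# run a CLI command over SSH using the same credential rules
Invoke-NcSsh -Name $myController -Credential $credential -Command "version"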

Arguably the most common method of connecting to a controller is by simply providing the hostname:

# this will attempt to connect to the specified controller using stored credentials, or if none
# are found, will prompt for credentials.  it will also default to HTTPS, with a fallback to HTTP
Connect-NcController $myController

If you are connecting to an SVM’s management interface this will work as expected, though some cmdlets won’t work because of the limited scope. If you want to connect to an SVM by tunneling through the cluster management interface, use the -Vserver parameter.

Connect-NcController $clusterMgmtLif -Vserver $SvmName

However, there are a number of parameters which change the default behavior.

# force prompt for credentials
Connect-NcController $myController -Credential (Get-Credential)

# use HTTPS or fail to connect
Connect-NcController $myController -HTTPS

# use HTTP or fail
Connect-NcController $myController -HTTP

Connecting to Multiple Controllers

After connecting to a cluster using the Connect-NcController cmdlet, the connection is stored in the variable $global:CurrentNcController and is used as the default for all subsequent cmdlets. However, we can modify this behavior in several useful ways if desired.

  • Don’t save the connection to $global:CurrentNcController

    This is useful when you will be connecting to multiple clusters/SVMs and want to specify which one to execute each command against.

    # connect to the first cluster/SVM
    $favoriteSvm = Connect-NcController $clusterMgmtIP -Vserver Favorite -Credential $credential -Transient
    
    # connect to the second cluster/SVM
    $hatedSvm = Connect-NcController $clusterMgmtIP -Vserver Hated -Credential $credential -Transient
    
    # execute cmdlets against one or the other
    Get-NcVol -Controller $favoriteSvm | Set-NcVolSize -NewSize +20% -Controller $favoriteSvm
    
    Get-NcVol -Controller $hatedSvm | Set-NcVol -Offline -Controller $hatedSvm | Remove-NcVol -Confirm:$false -Controller $hatedSvm

  • Multiple values in $global:CurrentNcController

    Sometimes it’s helpful to connect to multiple clusters or SVMs simultaneously. This will cause each cmdlet to be executed against all values in the $global:CurrentNcController array in succession.

    # connect to the first cluster/SVM
    Connect-NcController $clusterMgmtIP -Vserver Favorite -Credential $credential
    
    # connect to the second (or more) cluster/SVM
    Connect-NcController $clusterMgmtIP -Vserver SecondFavorite -Credential $credential -Add
    
    # execute tasks against both clusters/SVMs
    Get-NcVol
    
    # execute a task against one or the other
    Get-NcVol -Controller $global:CurrentNcController[0]
    Get-NcSnapshot -Controller $global:CurrentNcController[1]

Providing Credentials

By default the Connect-NcController cmdlet will check for stored credentials and, if none are found, fallback to prompting for them. We can work around this a few different ways.

  • Use a variable in your script
    #
    # store the credential in a variable for re-use
    #
    $credential = Get-Credential
    
    Connect-NcController $myFavoriteController -Credential $credential
    # do something using this controller
    
    Connect-NcController $myHatedController -Credential $credential
    # the first controller will automatically be disconnected. now do something
    # with the second controller.
  • Using the Add-NcCredential cmdlet
    #
    # store the credential using the PowerShell Toolkit
    #
    Add-NcCredential -Controller $myController -Credential (Get-Credential)
    
    # at this point, $myController can be connected to now and in the future, by the current system
    # user, without having to provide credentials again.  they are stored securely on the system, 
    # and, by default, are only accessible to the user who executed the Add-NcCredential cmdlet.
    
    # to make the stored credentials available to anyone on the system, use the -SystemScope 
    # parameter. note that any user on the system would be able to connect to the system with the 
    # stored credential, so be careful when using this parameter.
    Add-NcCredential -Controller $myController -SystemScope -Credential (Get-Credential)
  • Using the Export-Clixml cmdlet
    #
    # store the creds in a secure manner, then retrieve them.  note that only the user
    # who created the credential object will be able to read it
    #
    $credential | Export-Clixml ./credential.xml
    
    # retrieve them for use
    Connect-NcController $controller -Credential (Import-Clixml ./credential.xml)
  • Using Plain Text
    # 
    # note that this is by far the least secure method
    #
    $username = 'admin'
    $password = 'P@s$w0rd'
    
    $ssPassword = ConvertTo-SecureString -String $password -AsPlainText -Force
    
    $credential = New-Object System.Management.Automation.PSCredential $username,$ssPassword
    
    Connect-NcController $myController -Credential $credential

The post NetApp PowerShell Toolkit: Authentication appeared first on The Practical Administrator.


NetApp PowerShell Toolkit 101: Volume Snapshots


Snapshots are one of the core features of ONTAP, and something that many, many people rely on every day to protect their data from accidental (or malicious…) deletion and corruption. The NetApp PowerShell Toolkit can help us to manage the configuration of snapshot policies, the application of those policies to volumes, and creating/deleting/reverting snapshots too.

This post will cover:

  • Snapshots
    • Management
    • Reporting
    • Snap Reserve
  • Snapshot Policies
  • Snapshot Autodelete
  • Recovering Data

Snapshots

Managing Snapshots

  • Show snaps for a volume
    Get-NcVol $volName | Get-NcSnapshot
  • Create a snapshot
    Get-NcVol $volName | New-NcSnapshot $snapName
  • Delete a snapshot
    # delete a specific snapshot
    Get-NcVol $volName | Get-NcSnapshot $snapName | Remove-NcSnapshot
    
    # delete all snapshots for a volume
    Get-NcVol $volName | Get-NcSnapshot | Remove-NcSnapshot
    
    # delete all snapshots, for all volumes, which match a name pattern
    $pattern = "weekly"
    Get-NcSnapshot | ?{ $_.Name -match $pattern } | Remove-NcSnapshot
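Because the bulk delete above is destructive, a dry run first is cheap; the toolkit's destructive cmdlets generally honor the standard -WhatIf and -Confirm parameters:

# preview what would be removed, without deleting anything
Get-NcSnapshot | ?{ $_.Name -match $pattern } | Remove-NcSnapshot -WhatIf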

Snapshot Reporting

  • Show volumes with no snapshot protection
    I originally created this snippet as a response to this NetApp Communities question. It returns non-root data volumes (not data protection volumes) which have the volume option nosnap enabled or have a snapshot policy of none.

    Get-NcVol | ?{ 
        # get non-root volumes
        $_.VolumeStateAttributes.IsNodeRoot -eq $false `
        -and
            # which are rw (this will exclude SnapMirror, etc.)
            $_.VolumeIdAttributes.Type -eq "rw" `
        -and 
        (
            # with "nosnap" turned on
            (($_ | Get-NcVolOption -Hashtable).value.nosnap -eq "on") `
            -or 
            # or with snapshot policy set to none
            ($_.VolumeSnapshotAttributes.SnapshotPolicy -eq "none") 
        )
    }

  • Show the oldest snapshot for a volume

    Get-NcVol $volumeName | Get-NcSnapshot | `
      Sort-Object -Property Created | Select-Object -First 1

  • Show snapshots more than X days old

    $daysAgo = 14
    
    Get-NcSnapshot | Where-Object {
        # multiply the days by -1 to go backward.  if the value was
        # positive it would be in the future
        $_.Created -lt ((Get-Date).AddDays($daysAgo * -1))
    }

  • Show the cumulative snapshot usage for one (or more) volumes

    # single volume
    (Get-NcVol $volumeName).VolumeSpaceAttributes.SizeUsedBySnapshots | `
      ConvertTo-FormattedNumber
    
    # total snapshot space used for all volumes for a particular SVM
    ((Get-NcVserver $svmName | Get-NcVol).VolumeSpaceAttributes.SizeUsedBySnapshots | `
      Measure-Object -Sum).Sum | ConvertTo-FormattedNumber

  • Show volumes with a dependent/busy snapshot

    Get-NcSnapshot | ?{ $_.Dependency -ne $null }

Snap Reserve

Snap reserve is the amount of space in the volume which has been set aside for snapshot data…i.e. the data which is changed. The size of the snapshot is “contained” in this capacity, and not deducted from the available space in the volume.

  • Show snap reserve for a volume

    Get-NcVol $volName | Get-NcSnapshotReserve

  • Set the snap reserve for a volume

    Get-NcVol $volName | Set-NcSnapshotReserve -Percentage 10

  • Show volumes with no snap reserve

    # using a query against volumes
    Get-NcVol -Query @{ VolumeSpaceAttributes = @{ SnapshotReserveSize = 0 } }
    
    # or, using the snap reserve cmdlet
    Get-NcSnapshotReserve | ?{ $_.Percentage -eq 0 }

  • Show volumes with snap reserve > X percent
    # the percentage threshold
    $percent = 5
    
    # using a query
    Get-NcVol -Query @{ VolumeSpaceAttributes = @{ PercentageSnapshotReserve = ">$($percent)" } }
    
    # using the snap reserve cmdlet
    Get-NcSnapshotReserve | ?{ $_.Percentage -gt $percent }
  • Show volumes with snapshots exceeding the snap reserve

    Get-NcVol | Where-Object {
        $_.VolumeSpaceAttributes.SizeUsedBySnapshots -gt $_.VolumeSpaceAttributes.SnapshotReserveSize
    }

Managing Snapshot Policies

The snapshot policy is what determines when the ONTAP system will automatically create a snapshot and how long to retain it.
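
Before assigning or changing policies, it can be useful to see which ones already exist. A quick sketch (Get-NcSnapshotPolicy is a toolkit cmdlet; its -Name parameter is an assumption worth confirming with Get-Help, and "Gold" is the policy created later in this section):

# list every snapshot policy defined on the cluster,
# including the built-in default policy
Get-NcSnapshotPolicy

# inspect a single policy by name
Get-NcSnapshotPolicy -Name Gold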

  • Show snapshot policy for all volumes

    Get-NcVol | Select-Object Name,
        @{ N="Snapshot Policy"; E={ $_.VolumeSnapshotAttributes.SnapshotPolicy } }

  • Show volumes with a particular policy

    Get-NcVol -Query @{VolumeSnapshotAttributes=@{SnapshotPolicy=$policyName}}

  • Create a policy with a custom schedule

    #
    # create custom cron schedule(s) for the policy
    #
    
    # snapshot every two hours
    Add-NcJobCronSchedule -Name c2hour -Hour 0,2,4,6,8,10,12,14,16,18,20,22
    
    # snapshot every day at midnight
    Add-NcJobCronSchedule -Name cDaily -Day -1 -Hour 0
    
    # snapshot every Sunday at midnight (day of week 0 = Sunday)
    Add-NcJobCronSchedule -Name cWeekly -DayOfWeek 0 -Hour 0
    
    # snapshot every month, on the first, at midnight
    Add-NcJobCronSchedule -Name cMonthly -Month -1 -Day 1 -Hour 0
    
    # snapshot every year, January first at midnight
    Add-NcJobCronSchedule -Name cYearly -Month 0 -Day 1 -Hour 0 -Minute 0
    
    #
    # create the snapshot policy, add the first schedule, keeping twelve
    # bi-hourly snapshots (one day's worth)
    #
    New-NcSnapshotPolicy -Name Gold -Schedule c2hour -Count 12
    
    #
    # add the remaining schedules to complete the policy
    #
    
    # keep seven daily snapshots
    Add-NcSnapshotPolicySchedule -Name Gold -Schedule cDaily -Count 7
    
    # keep four weekly snapshots
    Add-NcSnapshotPolicySchedule -Name Gold -Schedule cWeekly -Count 4
    
    # keep twelve monthly snapshots (one year's worth)
    Add-NcSnapshotPolicySchedule -Name Gold -Schedule cMonthly -Count 12
    
    # keep one yearly snapshot
    Add-NcSnapshotPolicySchedule -Name Gold -Schedule cYearly -Count 1

  • Change the snapshot policy for a volume (a bulk variant follows this list)
    $query = @{
        Name = $volName
    }
    
    $attributes = @{
        VolumeSnapshotAttributes = @{
            SnapshotPolicy = $policyName
        }
    }
    
    Update-NcVol -Query $query -Attributes $attributes
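
Because Update-NcVol is query driven, the same pattern can re-point every data volume in an SVM at once. A hedged sketch (OwningVserverName and IsVserverRoot are assumed query fields; verify them against your toolkit version):

# apply a snapshot policy to every non-root volume in one SVM
$query = @{
    VolumeIdAttributes = @{ OwningVserverName = $svmName }
    VolumeStateAttributes = @{ IsVserverRoot = $false }
}

$attributes = @{
    VolumeSnapshotAttributes = @{ SnapshotPolicy = $policyName }
}

Update-NcVol -Query $query -Attributes $attributes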

Managing Snapshot AutoDelete

Snapshot AutoDelete is a protection mechanism, meant to prevent your volume from running out of space due to oversized snapshots. There are a number of settings associated with AutoDelete, and the names used by the ClusterShell and by the PowerShell Toolkit are slightly different; I’ve noted the differences below.

  • CLI = Enabled, PSTK = state – true/false (CLI) or on/off (PSTK); indicates whether AutoDelete is enabled for the volume.
  • Commitment – How aggressive should AutoDelete be when removing snapshots? try = only delete snapshots which are not “in use” or locked by FlexClone, SnapMirror, etc. disrupt = will allow AutoDelete to remove data protection (SnapMirror, etc.) snapshots. destroy = will allow AutoDelete to remove snapshots used by FlexClone.
  • Trigger – What causes the AutoDelete action to kick off? volume = when volume capacity crosses the (configurable) threshold. snap_reserve = when the snap reserve is nearly full. space_reserve = when the reserved space in the volume is nearly full.
  • Target Free Space – AutoDelete will stop deleting snapshots when free space reaches this percentage.
  • Delete Order – newest_first or oldest_first, the order in which snapshots will be deleted. Generally speaking, oldest_first will result in the most space reclaimed.
  • Defer Delete – scheduled = delete snapshots taken by the snapshot policy last. user_created = delete user created snapshots last. prefix = delete snapshots with the specified prefix last. none = don’t defer any, just delete in the specified delete order.
  • CLI = Defer Delete Prefix, PSTK = prefix – The prefix used when the defer delete value is prefix.
  • Destroy List – The list of services which can be destroyed if the backing snapshot is removed. This is the counterpart to the commitment value of destroy. The default is none, which is the safest option. Refer to the documentation for specifics.

With an understanding of the options, let’s look at how to query and modify AutoDelete settings.

  • Show the AutoDelete policy for a volume
    Get-NcVol $volName | Get-NcSnapshotAutodelete
  • Show all volumes with autodelete enabled/disabled
    # setting IsAutodeleteEnabled to $true will show volumes with autodelete enabled,
    # setting to $false will show volumes with autodelete disabled
    Get-NcVol -Query @{ VolumeSnapshotAutodeleteAttributes = @{ IsAutodeleteEnabled = $true } }
  • Enable/disable for a volume
    # enable
    Get-NcVol $volName | Set-NcSnapshotAutodelete -Key state -value on
    
    # disable
    Get-NcVol $volName | Set-NcSnapshotAutodelete -Key state -value off
  • Set multiple options for a volume
    $options = @{
        'commitment' = 'try';
        'defer_delete' = 'scheduled';
        'delete_order' = 'oldest_first';
        'state' = 'on';
        'target_free_space' = 20;
        'trigger' = 'volume';
    }
    
    Get-NcVserver $svmName | Get-NcVol | %{
        $volume = $_
    
        $options.GetEnumerator() | %{
            $volume | Set-NcSnapshotAutodelete -Key $_.Key -Value $_.Value
        }
    }

Recovering Data

Revert a snapshot

# note that if you revert a node's root volume
# it will cause the node to reboot
Get-NcVol $volumeName | Restore-NcSnapshotVolume -SnapName $snapName

Restore a file using FlexClone

I originally posted a version of this to the NetApp Communities site. This will create a FlexClone of the file from a snapshot into the current file system. Note that this is not the same thing as a single file snap restore, which uses the Restore-NcSnapshotFile cmdlet; a sketch of that alternative follows the example below.

$svmName = "mySVM"

$volumeName = "myFavoriteDatastore"
$sourceSnap = "weekly.0"

$files = @("vc.vmx", "vc.vmdk", "vc.vmdk-flat")
$sourceFolder = "/vc/"
$destinationFolder = "/vc_restored/"

$files | %{ 
    $splat = @{
        # the name of the volume which holds the file(s)
        'Volume' = $volumeName;

        # the path to the file in the volume
        'SourcePath' = "$($sourceFolder)$($_)";

        # the path for the restored file
        'DestinationPath' = "$($destinationFolder)$($_)";

        # if false, the clone will be thin.  if omitted, the
        # policy will be inherited from the original
        'SpaceReserved' = $false;

        # the source snapshot
        'Snapshot' = $sourceSnap;

        # this command must be targeted at a SVM
        'VserverContext' = $svmName;
    }

    New-NcClone @splat
}
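
If an in-place restore is what you actually want, here is a minimal sketch of the single file snap restore alternative (the "/vol/<volume>/<path>" format and the parameter names are assumptions; confirm them with Get-Help Restore-NcSnapshotFile before running, since this overwrites the live copy of the file):

# restore one file in place from a snapshot; unlike the FlexClone
# approach above, this replaces the current version of the file
Restore-NcSnapshotFile -Path "/vol/myFavoriteDatastore/vc/vc.vmx" `
    -SnapName "weekly.0" -VserverContext "mySVM"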

The post NetApp PowerShell Toolkit 101: Volume Snapshots appeared first on The Practical Administrator.
