VMAX – FAST

VMAX:

The VMAX configuration elements are slightly different due to architectural differences in pooling; a short sketch of how these elements relate to each other follows the list:

  • Disk Groups: collections of like-type disks (e.g. 200GB EFD)
  • Virtual Pools: pooled storage capacity formed from disk groups
  • FAST-VP Tiers: association of a tier with the pooled capacity above; multiple pools of like drive type and RAID protection type can be associated with a single FAST-VP Tier
  • FAST-VP Policies: auto-tiering policies specifying how much of each tier can be utilized, e.g. 100/100/100 tells the system that 100% of the EFD, 100% of the FC, and 100% of the SATA tier can be utilized for a given LUN/Storage Group
  • Storage Groups: collections of host LUNs
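
To make the relationships between these elements concrete, here is a minimal, purely illustrative Python sketch of the hierarchy. This is not a VMAX API; all class names, capacities, and object names below are made up for the example.

```python
from dataclasses import dataclass, field

@dataclass
class DiskGroup:              # collection of like-type disks, e.g. 200GB EFD
    name: str
    technology: str           # "EFD", "FC", or "SATA"

@dataclass
class VirtualPool:            # pooled capacity carved from a disk group
    name: str
    disk_group: DiskGroup
    raid: str                 # e.g. "RAID5 (3+1)"
    capacity_gb: int

@dataclass
class FastVpTier:             # one or more like pools grouped into a tier
    name: str
    pools: list[VirtualPool] = field(default_factory=list)

@dataclass
class FastVpPolicy:           # max % of each tier a storage group may occupy
    name: str
    tier_percentages: dict[str, int]   # tier name -> max % by capacity

@dataclass
class StorageGroup:           # collection of host LUNs, associated with a policy
    name: str
    luns: list[str]
    policy: FastVpPolicy

# Example wiring, mirroring the configuration walked through below
efd_dg   = DiskGroup("DG_EFD", "EFD")
efd_pool = VirtualPool("EFD_Pool", efd_dg, "RAID5 (3+1)", capacity_gb=2000)
efd_tier = FastVpTier("EFD_Tier", [efd_pool])
ideal    = FastVpPolicy("idealP", {"EFD_Tier": 100, "FC_Tier": 100, "SATA_Tier": 100})
sg       = StorageGroup("App1_SG", luns=["TDEV_001", "TDEV_002"], policy=ideal)
```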

VMAX Pool Abstractions

On the VMAX, each type of drive is associated with a disk group:

VMAX Disk Groups

Then virtual pools can be created from each of the drive types. A RAID protection and a capacity are specified for each pool (the capacity can be all of the underlying disk group, or just a subset of it):

VMAX Pools

Next FAST-VP Tiers need to be created and associated with the Virtual Pools. A typical setup would be an EFD Tier, an FC Tier, and a SATA Tier. The example below shows the creation of an “EFD” FAST-VP Tier.

Creating a VMAX FAST-VP Tier

VMAX FAST-VP Tiers

All 3 tiers have been created above.

Next comes the FAST Policy. This determines the % of each tier (by capacity) that a LUN can occupy. 100/100/100 is the ideal policy, as it basically tells the system “I’m not placing any restrictions on the tier percentages, you decide the best place to land the data”. This gives the system full control; if the VNX had an equivalent FAST-VP Policy parameter, this (100/100/100) would effectively be its setting:

Creating a FAST Policy

idealP FAST Policy Created

Alternatively, different policies can be created, such as 20/30/50, which tells the system that at most 20% of the capacity can live on EFD and at most 30% on FC, with the rest living on the SATA tier. So again, the FAST Policy on the VMAX allows plenty of “nerd knob” tweaking if desired. Another use for this would be a multi-tenant or storage-as-a-service model where internal/external customers pay different $/GB rates based on expected SLAs.
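
As a rough illustration of the capacity math behind a policy (not actual FAST-VP code; the function name and figures are made up), here is how the per-tier caps work out for a given storage group size:

```python
def tier_caps_gb(sg_capacity_gb: float, policy: dict[str, int]) -> dict[str, float]:
    """Maximum capacity each tier may hold for a storage group under a policy.

    policy maps tier name -> max percent of the storage group's capacity.
    Percentages must total at least 100 so the data always has somewhere to live.
    """
    assert sum(policy.values()) >= 100, "tier percentages must total at least 100%"
    return {tier: sg_capacity_gb * pct / 100 for tier, pct in policy.items()}

# A 1 TB storage group under the 20/30/50 policy from the text:
print(tier_caps_gb(1024, {"EFD": 20, "FC": 30, "SATA": 50}))
# {'EFD': 204.8, 'FC': 307.2, 'SATA': 512.0}
# Under 100/100/100 every tier could hold the full 1024 GB, i.e. no restriction at all.
```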

Next a storage group and host devices (LUNs) must be created. This is as expected, but one item that needs to be specified is the initial pool binding, i.e. which virtual pool to associate the LUN with initially. The best practice is to choose the middle / FC tier for this. Note that if thin provisioning is used, no space is actually occupied in the pool until host writes are sent.

Additionally, there is a setting in VMAX 5876 code which allows new writes to land on the tier that FAST-VP decides is best, based on the data collected on host IO activity. With this setting, FAST-VP may decide to land a new write on SATA, even though the initial binding was on FC, if it deems that appropriate. This avoids tracks landing on FC first, only to be moved down to SATA later (or up to EFD). This is a system-wide setting called “allocate by FAST policy” and the recommendation is to enable it unless there is a good reason not to.
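
The effect of the “allocate by FAST policy” setting can be sketched as a simple decision. Again, this is purely illustrative; the setting name comes from the text above, but the function and pool names are invented:

```python
def pool_for_new_write(bound_pool: str,
                       fast_recommendation: str | None,
                       allocate_by_fast_policy: bool) -> str:
    """Decide which virtual pool a new thin allocation lands in.

    bound_pool              -- pool the TDEV was initially bound to (best practice: FC)
    fast_recommendation     -- tier FAST-VP considers best based on observed host IO, if any
    allocate_by_fast_policy -- the system-wide 5876 setting discussed above
    """
    if allocate_by_fast_policy and fast_recommendation is not None:
        # New writes may land directly on SATA or EFD instead of bouncing off FC first.
        return fast_recommendation
    return bound_pool

print(pool_for_new_write("FC_Pool", "SATA_Pool", allocate_by_fast_policy=True))   # SATA_Pool
print(pool_for_new_write("FC_Pool", "SATA_Pool", allocate_by_fast_policy=False))  # FC_Pool
```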

VMAX Create Storage Group

Next the storage group has to be associated with a FAST Policy:

VMAX Associate a FAST Policy

… and now the LUN is auto-tiered across the pools created, utilizing the FAST Policy specified (100/100/100 in this case).

The FAST Policy can be changed anytime, on the fly. For example, it could be changed from 100/100/100 to 20/30/50 or any other combination based on business needs. This gives a lot of flexibility in managing the performance & capacity of the array.

To summarize the VMAX data movement process as it pertains to auto-tiering:

-VMAX: the TDEV (LUN) is bound to a pool/tier (best practice FC, unless the workload is low); after the Initial Analysis Period, performance metrics are analyzed; extents are marked for promotion / demotion; data movements are queued up on the DAs (disk adapters); the TDEV remains bound to the pool it was originally bound to (for statistics purposes) regardless of where the tracks live; the behavior of new host writes depends on the “allocate by FAST Policy” setting.
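
A highly simplified sketch of that analyze → mark → queue cycle, for illustration only (the real logic lives inside the array; the function, IO-rate numbers, and thresholds here are invented):

```python
def fast_vp_cycle(extent_stats: dict[str, float],
                  promote_threshold: float = 100.0,
                  demote_threshold: float = 5.0) -> dict[str, list[str]]:
    """One pass of the analyze -> mark -> queue loop described above.

    extent_stats maps an extent id to its observed IO rate over the analysis period.
    Returns the movements that would be queued for the DAs to execute.
    The TDEV itself stays bound to its original pool; only the tracks move.
    """
    moves = {"promote": [], "demote": []}
    for extent, io_rate in extent_stats.items():
        if io_rate >= promote_threshold:
            moves["promote"].append(extent)   # busy extents move toward EFD
        elif io_rate <= demote_threshold:
            moves["demote"].append(extent)    # cold extents move toward SATA
    return moves

print(fast_vp_cycle({"ext_a": 250.0, "ext_b": 40.0, "ext_c": 1.0}))
# {'promote': ['ext_a'], 'demote': ['ext_c']}
```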
