EMC VMAX 3

EMC VMAX³ Family Specifications

Click here to download datasheet

 

The overall theme of Mega Launch IV is “Redefine Possible”. With the new VMAX³ (“to the power of 3”), EMC focused on the agility and scale of the cloud model and on redefining what is possible in today’s datacenter.
This has certainly been delivered with the VMAX³, which incorporates a complete overhaul of the VMAX architecture, with advancements ranging from integrated System/Drive Bays standardised on 24″ wide racks through to Front-End connectivity. The highlights:
• Vault technology has moved to dedicated Flash modules as standard, so there is no longer a requirement to plan for vault drives on the first 5 drives per loop, as explained here in an earlier post.
• Intel IVY BRIDGE processors deliver up to 3X faster system performance.
• The Back-End is now native 6Gb/s SAS, offering 3X bandwidth to the Back-End devices using 2x 4-port SAS I/O modules per director, while the Front-End now supports 16Gb/s FC using 4-port I/O modules (codenamed Rainfall).
• Two options for high-density Disk Array Enclosures: a 60-drive 3.5″ DAE codenamed Voyager, allowing up to 360 drives per Engine, or a 120-drive 2.5″ DAE codenamed Viking (DAE120), allowing up to 720 drives per Engine.
• System Bay dispersion of up to 25 meters from System Bay 1 (no Drive Bay dispersion).
• Three new VMAX models, the 100K, 200K and 400K (collectively referred to as “VG3R”), built on technology architected for Hybrid Cloud scale. Key among these technologies is the “DYNAMIC VIRTUAL MATRIX”, an extension of the existing “Virtual Matrix” that allows for much greater scale and, more importantly, much greater flexibility. On top of that sits the “HYPERMAX OPERATING SYSTEM”, a new Operating System based on the existing OS but designed to also bring the Data Services that reside outside of the array into the VMAX Array.
• An all-Flash VMAX³ delivering 1 million IOPS with sub-millisecond response times and ~600TB on a single floor tile, with options to scale up to 4PB.
• A new “Hybrid Cloud scale” snapshot feature called SnapVX, allowing up to 1,024 snaps per individual source without the requirement for a dedicated snapshot reserve volume.

DYNAMIC VIRTUAL MATRIX
When VMAX was first introduced it brought with it the “Virtual Matrix”; with VMAX³ we now have the “Dynamic Virtual Matrix”. This leverages the multiple CPU cores within the box (the largest 400K system has 384 CPU cores), allowing them to be dynamically allocated to Front-End resources, to Back-End resources and to Data Services. Think of it as three pools: one for Front-End resources, one for Back-End resources and one for Data Services. Within a pool, cores can be dynamically assigned to their respective ports to cater for any particular workload. For example, if you had a hot port on the Front-End, traditionally this was a fixed port-to-CPU relationship and you were limited by the performance of the CPU assigned to that port. To fully understand the static nature of this previous design, the following diagram depicts the CPU-to-port relationship for the VMAX 40K:
(for 10K and 20K diagrams see the following PDF Link)

As you can see, with this older design one core was statically allocated per Front-End slice and two cores per Back-End slice. This design is gone with the VMAX³, which has been rearchitected around the “Dynamic Virtual Matrix”:

For example, if you now have a hot port on a VMAX³ Array and it requires additional CPU resources, it can pull CPU allocation from the entire Front-End resource pool to service the required workload. The same can be achieved on a Back-End port: if the port requires additional CPU resources, it can dynamically pull from the Back-End resource pool. An important thing to note is that in addition to this horizontal movement within a specific pool, you can also have vertical movement of CPU resources between pools. For example, if you have a high-performing OLTP-type workload and you need additional Front-End CPU resources, you can manually allocate CPU cores from the Back-End to the Front-End. Likewise, if you have a DSS workload that is heavy on the Back-End, you can allocate cores from the Front-End to the Back-End to better service that workload.
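To make the pooling idea concrete, here is a minimal, purely illustrative Python sketch of horizontal and vertical core movement. The class names, core counts and port names are my own assumptions for illustration only, not EMC internals or APIs:

```python
# Illustrative model of the "Dynamic Virtual Matrix" pooling idea: cores live
# in three pools (front-end, back-end, data services); a hot port can borrow
# spare cores from its own pool (horizontal movement), and an administrator
# can shift cores between pools (vertical movement). All names and numbers
# here are assumptions for illustration only.

from dataclasses import dataclass, field

@dataclass
class CorePool:
    name: str
    free_cores: int
    assignments: dict = field(default_factory=dict)  # port -> cores

    def assign(self, port: str, cores: int) -> bool:
        """Horizontal movement: give a hot port more cores from this pool."""
        if cores > self.free_cores:
            return False
        self.free_cores -= cores
        self.assignments[port] = self.assignments.get(port, 0) + cores
        return True

def move_cores(src: CorePool, dst: CorePool, cores: int) -> bool:
    """Vertical movement: manually shift spare cores between pools."""
    if cores > src.free_cores:
        return False
    src.free_cores -= cores
    dst.free_cores += cores
    return True

# Example: a director's cores split across the three pools (counts assumed).
front_end = CorePool("front-end", free_cores=16)
back_end = CorePool("back-end", free_cores=16)
data_services = CorePool("data-services", free_cores=16)

front_end.assign("FA-1D:4", 4)        # hot FC port pulls extra cores
move_cores(back_end, front_end, 4)    # OLTP-heavy: favour the Front-End
print(front_end, back_end, sep="\n")
```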

HYPERMAX OS
EMC are now introducing the “HYPERMAX OS”, a revolutionary step that brings forward a new version of the Operating System and provides a significant enhancement to Enginuity. In addition to providing the traditional best-of-breed Enginuity Data Services, it provides an embedded storage hypervisor that allows users to run other Data Services that traditionally ran outside of the storage array, such as management consoles, file gateways, cloud gateways, replication appliances, data mobility solutions, protection solutions and so on.
The “Dynamic Virtual Matrix” works in conjunction with the “HYPERMAX OS”: using its new core mobility and core isolation functionality, the VMAX³ can run these Data Services on their own threads and their own CPU cores so they do not interfere with the other activities running within the VMAX³ Array.

These Data Services can all be run within the VMAX³ on the storage hypervisor and protected by the VMAX³’s built-in high availability features. Resources are allocated by the “Dynamic Virtual Matrix” to guarantee that the required performance is achieved for each Data Service.
Thus with the VMAX³, not only can we achieve a greater level of consolidation through the increased density of the box, but thanks to the power of the “HYPERMAX OS” we can also begin consolidating some of those services that previously ran outside of the VMAX³ into the Array itself. This helps lower both CAPEX and OPEX, as energy and space requirements are greatly reduced. Bringing these Data Services into the Array also helps their performance, because they leverage direct access to the VMAX hardware, reducing the latency associated with running these services external to the Array. This is a great example of the VMAX³ Array combining control and trust with the agility of the Hybrid Cloud. Obviously this does not mean we can consolidate application workloads or network services, as these of course perform better outside of the Storage Array.

Service Level Objectives (SLO)
All storage in the VMAX³ Array is virtually provisioned, and all of the pools are created in containers called “Storage Resource Pools (SRP)”.

This entire box has been built and optimized to manage storage in a whole new way, allowing you to provision applications against a “Service Level Objective” (SLO) and leveraging deep automation within the Array to make sure you receive guaranteed and predictable performance levels as you scale. For example, if you have a variety of workloads that would each require individual management, you now have the option of categorising these workloads by their performance requirements, instead of:
• calculating the number and types of disk drives needed to cater for the workload
• determining how much flash you need
• choosing the SKU
• deciding how to apply tiering
All of these decisions are no longer required; the decision points have been taken away and you only have to enter your performance objectives, i.e. metric requirements. For example: do you require sub-millisecond response times, or do response times of 1ms to 10ms suffice for your workload characteristics?

Based on these types of metric requirements (at GA the main focus is on response-time levels), a service level is created for each workload that you provision on the VMAX.

The system uses all this intelligence, together with the dynamic capabilities of the VMAX, so that you are guaranteed to receive the performance levels required throughout the lifecycle of the application. As system workloads change over time and other workloads are added to the Array, the VMAX will continue to dynamically add resources to guarantee that you keep getting the level of performance needed to match the defined SLO. This is of course provided those resources are available in the Array; if the workload increases and you need to add more resources to the VMAX (such as additional flash drives), you will be alerted in advance. Alerts are enabled by default for a fixed list of KPIs, and the system will provide remediation options to cater for the increasing resource demand. If you have an application running at one Service Level, for example Silver, and this no longer satisfies the workload requirements, you can elevate it to a higher Service Level to meet a better performance level. This can be achieved with a single click, and the VMAX will automatically assign the additional resources as available to ensure the SLO is achieved.
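As a rough illustration of the SLO-driven model described above, the sketch below picks a service level from a target response time and performs the “single click” elevation step. The SLO names follow the Silver example in the text, but the response-time bands and helper functions are assumptions, not the published VMAX³ SLO definitions:

```python
# Illustrative sketch of SLO-based provisioning: pick a service level from a
# target response time, and "elevate" a workload when the current SLO no
# longer satisfies it. The response-time bands below are assumptions, not the
# official VMAX3 SLO catalogue.

SLO_TARGETS_MS = {   # assumed target average response times per SLO
    "Diamond": 1.0,
    "Gold": 5.0,
    "Silver": 10.0,
    "Bronze": 15.0,
}

def choose_slo(required_response_ms: float) -> str:
    """Return the least aggressive SLO whose target still meets the requirement."""
    candidates = [(target, name) for name, target in SLO_TARGETS_MS.items()
                  if target <= required_response_ms]
    if not candidates:
        return "Diamond"          # nothing slower will do
    return max(candidates)[1]     # loosest SLO that still fits

def elevate(current: str) -> str:
    """Single-step elevation to the next better SLO (the 'one click' case)."""
    order = ["Bronze", "Silver", "Gold", "Diamond"]
    i = order.index(current)
    return order[min(i + 1, len(order) - 1)]

print(choose_slo(0.9))    # -> Diamond (sub-millisecond requirement)
print(choose_slo(8.0))    # -> Gold    (1ms-10ms band)
print(elevate("Silver"))  # -> Gold
```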

New High Density Disk Array Enclosures
DAEs are available in two options: a 4U enclosure that can hold 60 3.5” drives and a 3U enclosure that can hold 120 2.5” drives, with each Engine having two “loops” upon which to connect the DAEs. You may have a combination of both types of DAE, subject to DAE population rules (more on this later). Each System Bay can support from a minimum of one to a maximum of six DAEs of any combination, while a Storage Bay can support from one up to a maximum of eight DAEs of any combination. DAEs can be added or upgraded in single or multiple increments.
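As a quick sanity check, the 360 and 720 drives-per-Engine figures quoted earlier follow directly from six DAEs per Engine (a minimal sketch, assuming the stated 60-slot Voyager and 120-slot Viking DAE sizes):

```python
# Simple arithmetic check of the DAE figures quoted above: a Voyager DAE holds
# 60 x 3.5" drives and a Viking DAE holds 120 x 2.5" drives, and with up to
# six DAEs per Engine this gives the 360 / 720 drives-per-Engine figures.
DAE_SLOTS = {"Voyager": 60, "Viking": 120}
MAX_DAES_PER_ENGINE = 6

def max_drives_per_engine(dae_type: str) -> int:
    return DAE_SLOTS[dae_type] * MAX_DAES_PER_ENGINE

print(max_drives_per_engine("Voyager"))  # 360
print(max_drives_per_engine("Viking"))   # 720
```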

VOYAGER DAE

Connectivity
• Connectivity: 6G SAS
• Connectors: 8 x4 mini SAS connectors
Mechanical
• 4U, 19” wide NEMA, 39.5” deep
• Loaded Weight: 225 lbs
• Max. Drive Count: 60 3.5″ drives
Power/Cooling
• Max. Output Power: 1200W (15W/drive slot)
• Power Architecture: N+1
• Total Power Inputs: 4 (AC) power cords (2 on each rail)
• Cooling Mode: N+1 fans with adaptive cooling

VIKING DAE

Connectivity
• Connectivity: 6G SAS
• Connectors: 8 x4 mini SAS connectors
Mechanical
• 3U, 19” wide NEMA, 39.5” deep
• Loaded Weight: 150 lbs
• Max. Drive Count: 120 2.5” drives
Power/Cooling
• Max. Output Power: 2160W (10W/drive slot)
• Power Architecture: N+1
• Total Power Inputs: 4 (AC) power cords (2 on each rail)
• Cooling Mode: N+2 fans with adaptive cooling

New Ultra Performing Engines
Based on the Megatron Engine design, EMC have delivered three new Engine models to cater for each of the new VMAX³ systems. The 4U Engine enclosure consists of two Director boards and one set of SPS (battery backup) modules, with each Director board having its own redundant power and cooling and its own Intel IVY BRIDGE processors.

Each Engine contains 2 Management Modules and supports 11 I/O slots per Director. Each Engine requires a minimum of 2 VTF SLICs, 2 FE SLICs and 1 DAE. Since the vaulting process on these platforms uses Flash SLICs instead of vault drives, each Engine requires a set of Flash SLICs to support any instance of vaulting. The size of the Flash SLICs is determined by the amount of cache in the system and the metadata required by the configuration. Flash SLICs are available in 175GB, 350GB and 700GB capacities.
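Purely as an illustration of the dependency described above (vault capacity driven by cache plus metadata), the sketch below picks the smallest Flash SLIC option that covers an assumed vault requirement. The SLIC count per Engine and the sizing rule itself are assumptions for illustration; actual sizing is done by EMC configuration tooling:

```python
# Illustrative sizing sketch: choose the smallest available Flash SLIC
# capacity whose total (capacity x SLIC count) covers the cache plus metadata
# that must be vaulted. The SLIC count per Engine and the rule itself are
# assumptions, not the real EMC sizing algorithm.
FLASH_SLIC_SIZES_GB = [175, 350, 700]

def pick_flash_slic(cache_gb: int, metadata_gb: int, slics_per_engine: int = 4) -> int:
    needed = cache_gb + metadata_gb
    for size in FLASH_SLIC_SIZES_GB:
        if size * slics_per_engine >= needed:
            return size
    raise ValueError("vault requirement exceeds largest SLIC option")

print(pick_flash_slic(cache_gb=512, metadata_gb=100))   # -> 175
print(pick_flash_slic(cache_gb=2048, metadata_gb=200))  # -> 700
```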

VMAX³ 100K System
The 100K (Lapis) uses a Megatron-lite Engine with the following specifications:
• Supports 1-2 Engines; each Engine: 2.1GHz, 24 IVB cores, with 512GB or 1TB memory
• 1440 2.5” drives
• 720 3.5” drives
• 2 TBr cache memory
• 440 TBu capacity using 4TB drives
• 250K IOPS
• 64 FE ports
• Integrated service processor: Management Module & Control Station (MMCS)
• InfiniBand fabric with 56Gb/s link speeds and a 12-port switch

VMAX³ 200K System
The 200K (Alexandrite) uses a Megatron Engine with the following specifications:
• Supports 1-4 Engines; each Engine: 2.6GHz, 32 IVB cores, with 512GB, 1TB or 2TB memory
• 2880 2.5” drives
• 1440 3.5” drives
• 8 TBr cache memory
• 1.8 PBu capacity using 4TB drives
• 850K IOPS
• 128 FE ports
• Integrated service processor: Management Module & Control Station (MMCS)
• InfiniBand fabric with 56Gb/s link speeds and a 12-port switch

VMAX³ 400K System
The 400K (Ruby) uses a Megatron-heavy Engine with the following specifications:
• Supports 1-8 Engines; each Engine: 2.7GHz, 48 IVB cores, with 512GB, 1TB or 2TB memory
• 5760 2.5” drives
• 2880 3.5” drives
• 16 TBr cache memory
• 3.8 PBu capacity using 4TB drives
• 3.2M IOPS
• 256 FE ports
• Integrated service processor: Management Module & Control Station (MMCS)
• InfiniBand fabric with 56Gb/s link speeds and an 18-port switch
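For easy comparison, the headline figures from the three model lists above can be gathered into a small lookup structure (values as quoted above; the helper function is just an illustrative sizing aid, not a configuration tool):

```python
# Headline figures for the three VMAX3 models, as quoted in the lists above.
VMAX3_MODELS = {
    "100K": dict(engines=2, drives_2_5=1440, cache_tb=2,  usable="440 TBu", iops="250K", fe_ports=64),
    "200K": dict(engines=4, drives_2_5=2880, cache_tb=8,  usable="1.8 PBu", iops="850K", fe_ports=128),
    "400K": dict(engines=8, drives_2_5=5760, cache_tb=16, usable="3.8 PBu", iops="3.2M", fe_ports=256),
}

def smallest_model_for(drive_count: int) -> str:
    """Return the smallest model whose 2.5-inch drive ceiling covers drive_count."""
    for name, spec in VMAX3_MODELS.items():
        if spec["drives_2_5"] >= drive_count:
            return name
    raise ValueError("no single VMAX3 model supports that drive count")

print(smallest_model_for(2000))  # -> 200K
```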

VMAX has been repositioned from just a storage array to an enterprise data services platform: POWERFUL, TRUSTED AND AGILE, making VMAX³ the top Enterprise-class Array on the market today.