EMC Symmetrix DMX-4 and Symmetrix V-Max: Basic Differences

In this post we will cover some important characteristics of, and differences between, the EMC Symmetrix DMX-4 and the EMC Symmetrix V-Max. It seems a lot of readers have been searching blog posts for this information.

From a high level, I have tried to cover the differences in performance and architecture related to the directors, engines, cache, drives, and so on.

It might also be a good idea to run both the DMX-4 and V-Max systems through IOmeter to collect some basic comparisons of front-end and coordinated back-end / cache performance data.
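
If you do run such a comparison, a short script can summarize the exported results. The Python sketch below assumes each run has been exported to a CSV file with hypothetical `iops` and `latency_ms` columns; the file names and column names are illustrative placeholders, not IOmeter's actual export format, so adjust them to whatever your tool writes out.

```python
import csv
import statistics

def summarize(path):
    """Average IOPS and latency from one benchmark CSV export.

    Assumes hypothetical 'iops' and 'latency_ms' columns; rename the
    fields to match your benchmark tool's actual output.
    """
    iops, lat = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            iops.append(float(row["iops"]))
            lat.append(float(row["latency_ms"]))
    return statistics.mean(iops), statistics.mean(lat)

# Hypothetical result files, one per array under test.
for system in ("dmx4", "vmax"):
    mean_iops, mean_lat = summarize(f"{system}_results.csv")
    print(f"{system}: {mean_iops:,.0f} IOPS avg, {mean_lat:.2f} ms avg latency")
```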

Anyway, enjoy this post, and look for more related material in future posts.

| EMC Symmetrix DMX-4 | EMC Symmetrix V-Max |
| --- | --- |
| DMX: Direct Matrix Architecture | V-Max: Virtual Matrix Architecture |
| Max capacity: 1 PB raw storage | Max capacity: 2 PB usable storage |
| Max drives: 1,900 (2,400 max via RPQ) | Max drives: 2,400 |
| EFDs (Enterprise Flash Drives) supported | EFDs supported |
| Symmetrix Management Console 6.0 | Symmetrix Management Console 7.0 |
| Solutions Enabler 6.0 | Solutions Enabler 7.0 |
| EFDs: 73GB, 146GB, 200GB, 400GB | EFDs: 200GB, 400GB |
| FC drives: 73GB, 146GB, 300GB, 400GB, 450GB | FC drives: 73GB, 146GB, 300GB, 400GB |
| SATA II drives: 500GB, 1000GB | SATA II drives: 1000GB |
| FC drive speed: 10K or 15K RPM | FC drive speed: 15K RPM |
| SATA II drive speed: 7.2K RPM | SATA II drive speed: 7.2K RPM |
| The DMX-4's predecessor is the DMX-3 | The V-Max's predecessor is the DMX-4 |
| Management became a bit easier compared to the previous-generation Symmetrix | Easier management still, at least with SMC 7.0 (the so-called "ECC Lite") |
| 4 ports per director | 8 ports per director |
| No engine-based design | Engine-based design |
| 24 slots | The concept of slots is gone |
| 1 system bay, 9 storage bays | 1 system bay, 10 storage bays |
| No engines | 8 engines in one system (one serial number) |
| 64 Fibre Channel ports total across all directors for host connectivity | 128 Fibre Channel ports total across the directors/engines for host connectivity |
| 32 FICON ports for host connectivity | 64 FICON ports for host connectivity |
| 32 GbE iSCSI ports | 64 GbE iSCSI ports |
| Total cache: 512GB, with 256GB usable (mirrored) | Total cache: 1,024GB, with 512GB usable (mirrored) |
| Drive interface speed of 2Gb/s or 4Gb/s; drives auto-negotiate speed | Drive interface speed of 4Gb/s |
| A green drive LED means 2Gb/s loop speed; a blue drive LED means 4Gb/s loop speed | Only the 4Gb/s loop speed is supported |
| 512-byte sector format | 520-byte sector format, with the extra 8 bytes storing data-integrity check info. Recall the CLARiiON drive formats; the data stored in the two cases differs. On the V-Max the 8 bytes hold a Data Integrity Field per the T10-DIF standard proposal (see the first sketch after the table) |
| FAST (Fully Automated Storage Tiering) may not be supported on the DMX-4 (most likely because support will depend on microcode level rather than hardware) | FAST will be supported on the V-Max later this year |
| Microcode: 5772 / 5773 runs the DMX-4 | Microcode: 5874 runs the V-Max |
| Released in July 2007 | Released in April 2009 |
| Directors and cache live on separate physical slots/cards | Director and cache are condensed onto a single board |
| TimeFinder performance improved over the previous generation | 300% better TimeFinder performance compared to the DMX-4 |
| No IP management interface into the Service Processor | IP management interface to the Service Processor; it can be managed through the customer's IP network infrastructure |
| Symmetrix Management Console is free of charge up through the DMX-4 | Symmetrix Management Console is licensed at a cost starting with the V-Max |
| Architecture is similar to that of its predecessor, the DMX-3 | Architecture is completely redesigned in this generation and completely different from its predecessor, the DMX-4 |
| Microcode 5772 and 5773 were built on previous-generation microcode 5771 and 5772, respectively | Microcode 5874 is built on the 5773 base from the previous-generation DMX-4 |
| No RVA (RAID Virtual Architecture) | Implements RVA (RAID Virtual Architecture) |
| Largest supported volume: 64GB per LUN | Large volume support: 240GB per LUN (open systems) and 223GB per LUN (mainframe) |
| 128 hypers per drive (LUNs per drive) | 512 hypers per drive (LUNs per drive) |
| Configuration change not as robust as on the V-Max | Concurrent configuration change introduced, letting customers run change management through a single set of scripts rather than a step-by-step process |
| Presents some challenges with mirror positions | Reduced mirror positions, giving customers good flexibility for migrations and other opportunities |
| No Virtual Provisioning with RAID 5 and RAID 6 devices | Virtual Provisioning now allowed with RAID 5 and RAID 6 devices |
| No Autoprovisioning Groups | Autoprovisioning Groups introduced with the V-Max |
| Minimum size: a single-storage-bay system supporting 240 drives, purchased with a system bay | Minimum size: the V-Max SE (single engine), purchased with 1 engine and a maximum of 360 drives |
| No engine concept; the architecture is slot-based | Each engine consists of 4 quad-core Intel chips with 32GB, 64GB, or 128GB of cache and 16 front-end ports per engine; 4 back-end ports per engine connect the system bay to the storage bays |
| PowerPC chips used on the directors | Quad-core Intel chips used on the engines |
| PowerPath/VE support for vSphere virtual machines on the DMX-4 | PowerPath/VE supported for vSphere virtual machines on the V-Max |
| The backplane concept exists in this generation of storage | Fits in the modular-storage category and eliminates the backplane bottleneck |
| Truly sold as a generational upgrade to the DMX-3 | Sold with a big marketing buzz around hundreds of engines, millions of IOPS, terabytes of cache, and virtual storage |
| Systems cannot be federated | The concept of federation was introduced with the V-Max, but systems are not yet federated in production or customer environments |
| Directors are connected to the system through a legacy backplane (DMX: Direct Matrix Architecture) | Engines are connected through a copper RapidIO interconnect at 2.5GB/s |
| No support for FCoE or 10Gb Ethernet | No support for FCoE or 10Gb Ethernet |
| No support for 8Gb/s loop interface speeds | No support for 8Gb/s loop interface speeds |
| Strong marketing and good success | "Virtual marketing" for the Virtual Matrix: the product was introduced with FAST as a sales strategy, with FAST not available until at least the later part of the year |
| No InfiniBand support expected | Will InfiniBand be supported in the future to connect engines at short or long distances (several meters)? |
| No federation | With federation expected in upcoming V-Max releases, how will cache latency play out between federated systems several meters apart? |
| Global cache on Global Memory Directors | Global cache in the engines' local memory; since cache is shared between engines, some cache latency is expected as multiple engines request an I/O |
| A monster storage system | The V-Max building blocks (engines) can create a much larger storage monster |
| 256GB total vault space | 200GB of vault space per engine; with 8 engines, that is 1.6TB of vault storage |
| Performance has been great compared to the previous generations (DMX, DMX-2, DMX-3) | IOPS per port on the V-Max: 128 MB/s hits, 385 read, 385 write. IOPS for 2 ports: 128 MB/s hits, 635 read, 640 write |
| The V-Max performs better on FICON | 2.2x FICON performance compared to the DMX-4; 2 ports can deliver as many as 17,000 IOPS on FICON |
| Large metadata overhead with the number of volumes, devices, cache slots, and so on | A 50 to 75% reduction in metadata overhead |
| SRDF technology supported | New SRDF/EDP (Extended Distance Protection): a diskless R21 pass-through device, with no disk required for the pass-through |
| Symmetrix Management Console 6.0 supported; no templates or wizards | Templates and wizards in the new SMC 7.0 console |
| Total SRDF groups supported: 128 | Total SRDF groups supported: 250 |
| 16 SRDF groups on a single port | 64 SRDF groups on a single port |
| Connectivity baseline | 2x connectivity compared to the DMX-4 |
| Usable-storage baseline | 3x usability compared to the DMX-4 |
| First Symmetrix generation to roll out RAID 6 support | RAID 6 performs 3.6x better than on the DMX-4 |
| RAID 6 support was, and is, a little premature | RAID 6 performance on the V-Max is equivalent to RAID 1 on the DMX-4 |
| SATA II performance is better than on the V-Max | SATA II drives do not support the 520-byte format, so EMC takes those 8 bytes (520 minus 512) of T10-DIF data-integrity information and writes them in 64K blocks or chunks throughout the entire drive, causing performance degradation; SATA II performance on the V-Max is therefore worse than on the DMX-4 (see the second sketch after the table) |
| Fibre Channel performance better than on the DMX and DMX-2 | Fibre Channel performance improved by about 36% over the DMX-4 |
| First to support 4Gb/s host interface connectivity | Fibre Channel performance: 5,000 IOPS per channel |
| RVA not available | RVA (RAID Virtual Architecture) lets a RAID volume consume one mirror position, leaving the remaining 3 positions for BCVs, SRDF, migration, and so on |
| No MIBE or SIB; the directors connect through a common backplane | The MIBE (Matrix Interface Board Enclosure) connects the odd and even (Fabric A and Fabric B) directors together; the SIBs (System Interface Boards) connect the engines using RapidIO |
| Director count goes from Director 1 on the left to Director 18 (hex) on the right | Director count goes from 1 at the bottom to 16 (F) at the top; with 2 directors per engine, 8 engines give 16 directors |
| Two director failures will not cause a system outage or data loss / data unavailability, provided the directors are not on the same fabric or bus and are not each other's DIs (dual initiators) | A single engine failure (2 directors) will not cause data loss / data unavailability or a system outage; failed components can be directors, engines, MIBEs, power supplies, fans, or cache within a single engine or 2 directors |
| Single loop outages will not cause DU | Single loop outages will not cause DU |
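
A quick illustration of the 520-byte row above: under the T10-DIF proposal, each 512-byte block carries an 8-byte tuple made up of a 2-byte guard tag (a CRC-16 of the data), a 2-byte application tag, and a 4-byte reference tag (typically the low 32 bits of the LBA). The Python sketch below is a generic illustration of that layout under those assumptions, not EMC's internal implementation.

```python
import struct

def crc16_t10dif(data: bytes) -> int:
    """CRC-16 with the T10-DIF polynomial 0x8BB7, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def sector_with_dif(data: bytes, lba: int, app_tag: int = 0) -> bytes:
    """Append the 8-byte DIF tuple to a 512-byte block, giving a 520-byte sector."""
    assert len(data) == 512
    guard = crc16_t10dif(data)     # 2-byte guard tag: CRC over the data portion
    ref_tag = lba & 0xFFFFFFFF     # 4-byte reference tag: low 32 bits of the LBA
    return data + struct.pack(">HHI", guard, app_tag, ref_tag)

sector = sector_with_dif(bytes(512), lba=1234)
print(len(sector))  # 520
```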
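
And for the SATA II row: since SATA drives cannot be formatted to 520-byte sectors, the 8 DIF bytes per sector are gathered and written out in 64K chunks spread across the drive. The arithmetic sketch below (purely illustrative, not EMC's actual on-disk layout) shows what that implies for capacity and extra disk activity.

```python
SECTOR = 512          # SATA sector size in bytes
DIF_BYTES = 8         # integrity bytes per sector that need a home
CHUNK = 64 * 1024     # per the post, DIF data is written in 64K chunks

sectors_per_chunk = CHUNK // DIF_BYTES        # 8,192 sectors
data_per_chunk = sectors_per_chunk * SECTOR   # 4 MiB of user data per DIF chunk
overhead = DIF_BYTES / SECTOR                 # ~1.56% of raw capacity

print(f"{sectors_per_chunk} sectors ({data_per_chunk // 2**20} MiB) share one 64K DIF chunk")
print(f"capacity overhead for DIF: {overhead:.2%}")
# Every host I/O potentially touches both the data region and its DIF chunk,
# which is the extra disk activity the post blames for the SATA II slowdown.
```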

More architectural details on the drives, cache, directors, cabinets, MIBE, SIB, and Service Processor will come in a V-Max architecture, expansion, and modularity post next week.


About the author

Devang Panchigar

With more than 7 years of IT experience, Devang is currently the Director of Technology Solutions and IT Operations at Computer Data Source, Inc. He has held several positions in the past, including Sr. Systems Engineer, Sr. Network Engineer, Technical Support Manager, and Director of Storage Support & Operations. He has been responsible for creating and managing worldwide technical support teams, a technology solutions team, operations management, service delivery, pre- and post-sales support, marketing, and business planning. In his current role Devang oversees multiple aspects of the Technology Solutions Group, which works with various multinational and Fortune 500 companies providing them infrastructure services. Along with various industry certifications, Devang holds a Bachelor of Science from South Gujarat University, India, and a Master of Science in Computer Science from North Carolina A&T State University.

1 Comment

  • "A single engine failure (2 directors) will not cause data loss / data unavailability or a system outage; failed components can be directors, engines, MIBEs, power supplies, fans, or cache within a single engine or 2 directors."

    That's not true, because each engine has its own DAEs connected to it. Losing an engine means the data on those DAEs is not available from the other engines.
