The whole story began with a tweet and a picture:
Spotted Marvin on VMware campus during a break this morning "first hyperconverged infrastructure appliance " pic.twitter.com/1iIPocjREX
— Fletcher Cocquyt (@Cocquyt) June 7, 2014
This picture, in combination with rumors about Project Mystic, motivated Christian Mohn to publish an interesting blog post. Today, two and a half months later, “Marvin”, or Project Mystic, got its final name: EVO:RAIL.
What is EVO:RAIL?
Firstly, we have to learn a new acronym: Hyper-Converged Infrastructure Appliance (HCIA). EVO:RAIL will be exactly this: an HCIA. IMHO, EVO:RAIL is VMware's attempt to jump on the fast-moving hyper-converged train. EVO:RAIL combines different VMware products (vSphere Enterprise Plus, vCenter Server, Virtual SAN and vCenter Log Insight) along with the EVO:RAIL deployment, configuration and management engine into a hyper-converged infrastructure appliance. Appliance? Yes, an appliance: a single stock keeping unit (SKU) including hardware, software and support. To be honest: VMware will not try to sell hardware. The hardware will be provided by partners (currently Dell, EMC, Fujitsu, Inspur, NetOne and SuperMicro).
VMware Chief Technologist Duncan Epping described four advantages of EVO:RAIL in a blog post published today:
EVO:RAIL is software-defined. Based on well-known VMware products, the EVO:RAIL engine simplifies the deployment, management and configuration of the building blocks.
EVO:RAIL is simple: The EVO:RAIL engine reduces the time from rack-and-stack to powering on your first VM. You need less time for basic tasks, like the creation of VMs or the patch management of the hosts. If you need more compute or storage capacity, simply add additional 2U blocks (currently a maximum of four blocks, i.e. 16 nodes).
EVO:RAIL is highly resilient: A 2U block consists of four nodes. This results in a single four-host vSphere cluster with a single VSAN datastore and full support for VMware HA, DRS, FT etc. This allows zero downtime for VMs during planned maintenance or node failures.
EVO:RAIL allows customers to choose: Customers can obtain EVO:RAIL using a single SKU from their preferred EVO:RAIL partner. The partner provides hardware, software and support for the EVO:RAIL HCIA.
Each HCIA node will provide at least:
- 2x Intel Xeon E5-2620 v2 six-core CPUs
- 192 GB of memory
- 1x SLC SATADOM or SAS HDD as boot device
- 3x SAS 10K RPM 1.2TB HDD for the VMware Virtual SAN datastore
- 1x 400 GB MLC enterprise-grade SSD for read/write cache
- 1x Virtual SAN-certified pass-through disk controller
- 2x 10GbE NIC ports (either 10GBase-T or SFP+)
- 1x 1GbE IPMI port for out-of-band management
This results in a four-node vSphere cluster with 48 cores, 768 GB RAM and 14.4 TB raw disk space in just 2U. A single block allows you to run 100 average-sized (2 vCPU, 4 GB RAM, 60 GB with redundancy) general-purpose VMs, or 250 View VMs (2 vCPU, 2 GB RAM, 32 GB linked clones).
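The aggregate figures follow directly from the per-node minimum spec. A quick sketch to verify the math (the numbers are taken from the spec list above; raw VSAN capacity here means HDDs only, since the SSD serves as cache):

```python
# Sanity check of the aggregate figures for a single 2U EVO:RAIL block
# (four nodes), based on the minimum per-node spec listed above.

NODES = 4
CORES_PER_NODE = 2 * 6      # 2x six-core Intel Xeon E5-2620 v2
RAM_GB_PER_NODE = 192       # 192 GB of memory per node
HDD_TB_PER_NODE = 3 * 1.2   # 3x 1.2 TB SAS HDDs for the VSAN datastore

cores = NODES * CORES_PER_NODE
ram_gb = NODES * RAM_GB_PER_NODE
raw_tb = round(NODES * HDD_TB_PER_NODE, 1)

print(cores, ram_gb, raw_tb)  # 48 768 14.4
```

Note that 14.4 TB is raw capacity; usable capacity depends on the VSAN failures-to-tolerate policy, since each protected object is stored redundantly.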
Sounds like a Nutanix clone, doesn't it? Yes, it's an HCIA like a Nutanix block. But it's focused on VMware (you can't run Microsoft Hyper-V or KVM on it) and it will be sold by EVO:RAIL partners. This allows VMware to use a much wider distribution channel. It will be fun to see how other hyper-converged companies react to this announcement. Unfortunately, HP isn't listed as an HCIA partner company. But Dell is listed. Fun fact: Dell and Nutanix signed a contract in June 2014.
Strategic Relationship Significantly Expands Access and Distribution of Nutanix Solutions with Dell’s World-Class Hardware, Services and Marketing to Accelerate Adoption of Web-scale Converged Infrastructure in the Enterprise
Take a look at the “Introduction to VMware EVO: RAIL” whitepaper. There are other great blog posts about EVO:RAIL as well.