What's New in vSphere 7 Core Storage | VMware

Table of Contents

  • vSphere 7.0 U3
    • NVMe/TCP
    • Host Scale support increase for NFS and VMFS
    • Affinity 3.0 Improvements for CNS
    • vVols Batch Snapshots
  • vSphere 7.0 U2
    • iSCSI Path Limit increase
    • RDM support for RHEL HA
    • VMFS SESparse Snapshot Improvements
    • Multiple Paravirtual RDMA (PVRDMA) adapter support
    • Performance Improvements on VMFS
    • NFS Improvements
    • HPP Fast Path Support for Fabric Devices
    • HPP as the default plugin for vSAN
    • VOMA improvements
    • vVols Enhancements and Updates (Support for Higher Queue Depth with vVols Protocol Endpoints, Create larger than 4GB Config vVol)
    • vVols with CNS and Tanzu (SPBM Multiple Snapshot Rule Enhancements, 32 Snapshot support for CNS First Class Disks, CNS PV to vVol mapping)
  • vSphere 7.0 U1
    • VMFS Enhancements
    • NFS Enhancements
    • RDM
    • NVMeoF Support for Oracle RAC
    • vVols as Principal Storage in VCF 4.1
  • vSphere 7.0
    • NVMeoF (NVMeoF Insight, VMware NVMeoF)
    • Clustered VMDK (Shared VMDK or Clustered VMDK in VMFS6, Enabling Clustered VMDK)
    • Perennially Reserved Flag for RDMs
    • Affinity 2.0 (VMFS Affinity Manager 2.0)
    • End Of Availability (EOA) vFRC and CBRC 1.0
    • vVols Interoperability with VMware Products (SRM 8.3, CNS, vRealize Operations 8.1, vVols as Supplemental Storage in VCF)
  • FAQs

vSphere 7.0 U3

Core Storage features and enhancements for vSphere 7 Update 3.

NVMe/TCP

NVMe over Fabrics (NVMeoF) extends NVMe from local storage to shared network storage. With the release of vSphere 7, the supported protocols for NVMeoF were FC and RDMA. With the release of vSphere 7 U3, we are adding support for TCP. One of the benefits of NVMe/TCP is that there is no need for specialized HBAs or RNICs (RDMA NICs) for connectivity; standard Ethernet networks and hardware may be used. Of course, having the necessary bandwidth for the additional overhead is imperative. With the ability to use standard Ethernet hardware, the cost of entry for NVMe/TCP is lower than with FC and RDMA. This gives many existing customers access to new storage technologies like NVMeoF.

If you're wondering which protocol performs best, a better question is what your application requires, and what the current configuration of your environment is. There can be additional costs for FC or RDMA, whereas TCP can usually use existing hardware. In general, RDMA is the highest performing, then FC, and finally TCP. If you need the absolute lowest latency and highest performance, RDMA is the way to go. If you're interested in NVMeoF without adding additional costs, TCP may be your best option. Of course, if you already have an FC environment, that would be the most obvious choice.

With the addition of NVMe/TCP, we have also added storage VMkernel NIC tagging in NIOC for NVMe/TCP. This allows customers to tune network resources for NVMe/TCP traffic to ensure enough bandwidth is allocated (see the example commands after the list below). When NIOC is enabled, distributed switch traffic is divided into the following predefined network resource pools:

  • Fault Tolerance traffic
  • iSCSI traffic
  • vMotion traffic
  • management traffic
  • vSphere Replication (VR) traffic
  • NFS traffic
  • virtual machine traffic
  • vSphere Data Protection traffic
  • backup NFC traffic
  • Newly added: NVMe/TCP traffic
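
Assuming a VMkernel adapter named vmk2 and an uplink named vmnic3 (both placeholders for your environment), a minimal command-line sketch for bringing up NVMe/TCP is to tag the VMkernel adapter for NVMe/TCP traffic and then create the software NVMe/TCP adapter on the uplink:

esxcli network ip interface tag add -i vmk2 -t NVMeTCP
esxcli nvme fabrics enable --protocol TCP --device vmnic3

These commands are illustrative; the same configuration can be done from the vSphere Client, and the exact options should be verified against the VMware documentation for your build.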

Host Scale support increase for NFS and VMFS

Many larger enterprises, service providers, and cloud deployments often reach the vSphere limit of 64 hosts per VMFS or NFS datastore. With the release of Update 3, we have increased the number of hosts that may connect to a VMFS-6 or NFS datastore from 64 to 128. This alleviates the need for special approval for a larger number of hosts accessing VMFS or NFS datastores. Note: This is not a hosts-per-cluster increase; it is the number of hosts that can access a single VMFS or NFS datastore.

Affinity 3.0 Improvements for CNS

In vSphere 7, VMware updated the Affinity Manager, which handles first writes for thin and lazy zeroed thick provisioned disks. The new Affinity Manager, 2.0, maintains a map of all free storage Resource Clusters. Resource Clusters are available space for new writes, which enables quicker first writes.

In U3, we add further enhancements with Affinity 3.0, which now adds support for CNS persistent volumes, or FCDs (First Class Disks). We have also added support for the higher number of vSphere hosts that can now access a single datastore.

vVols Batch Snapshots

With the potential scale vVols offers, ensuring operational efficiency is key. As engineering continues to enhance and develop vVols, we have improved the procedure for processing large numbers of vVol snapshots by turning snapshot operations into a batch process. By grouping large numbers of snapshot operations, we reduce the serialized actions used for snapshots, making the process more efficient and reducing the effect on the VMs and the storage environment.

vSphere 7.0 U2

Core Storage features and enhancements for vSphere 7 Update 2.

iSCSI Path Limit increase

One of the enhancements from the vSphere 7 Update 2 release I’m sure many customers will be thrilled about is the iSCSI path limit increase. Until this release, the iSCSI path limit was 8 paths per LUN, and many customers ended up going over this. Whether from multiple VMkernel ports or targets, customers often ended up with 16 or 24 paths. I’m excited to announce that with vSphere 7.0 U2, the new iSCSI path limit is now 32 paths per LUN.

RDM support for RHEL HA

A few changes were needed to enable Red Hat Enterprise Linux HA to use RDMs in vSphere. With the release of vSphere 7.0 U2, RHEL HA is now supported on RDMs.

VMFS SESparse Snapshot Improvements

Read performance has been improved by using a technique that directs reads to where the data resides rather than traversing the delta-disk snapshot chain every time. Previously, if a read came into a virtual machine that had snapshots, the read would traverse the snapshot chain and then go to the base disk. Now, when a read comes in, a filter directs it to either the snapshot chain or the base disk, reducing read latency.

Multiple Paravirtual RDMA (PVRDMA) adapter support

In vSphere 6.7, we announced support for RDMA in vSphere. One of the limitations was only a single PVRDMA adapter was supported per Virtual Machine. With the release of vSphere 7.0 U2, we now support multiple PVRDMA adapters per VM.

Performance Improvements on VMFS

With the release of vSphere 7.0 U2, we have made performance improvements to VMFS. The performance was improved for first writes on thin disks. These changes improve performance for backup and restore, copy operations, and Storage vMotion in certain instances. With this improvement and the enhancements with Affinity 2.0 in vSphere 7, the first write impact has further been reduced. These improvements help to reduce the potential effects of first writes when using thin-provisioned disks.

NFS Improvements

Previously, NFS required a clone to be created first for a newly created VM before subsequent snapshots could be offloaded to the array. With the release of vSphere 7.0 U2, we have enabled NFS array snapshots of full, non-cloned VMs to avoid redo logs and instead use the snapshot technology of the NFS array, providing better snapshot performance. This improvement removes the requirement of creating a clone first and enables the first snapshot to also be offloaded to the array.

HPP Fast Path Support for Fabric Devices

The High Performance Plugin (HPP) is a new Multi-Pathing Plugin (MPP) for ESXi that VMware has developed for very fast devices. HPP is a leaner MPP than NMP, but achieves some of this by dropping support for sub-plugins like Storage Array Type Plugins (SATPs) and Path Selection Plugins (PSPs). HPP started shipping with ESXi 6.7 but was not the default plugin for any devices and only supported single-pathed local devices. This made it relevant for special NVMe-PCIe based use cases where it could reach much higher IOPS than the ESXi Native Multi-Pathing Plugin (NMP).

With the release of vSphere 7.0 U2, HPP is now the default plugin for NVMe devices. The plugin comes with two options: SlowPath, with legacy behavior and VM fairness capabilities, and the newly added FastPath, designed to provide better performance than SlowPath with some restrictions. Even in SlowPath mode, HPP can often perform better than NMP for the same device because IOs are handled in batch mode, which helps reduce lock contention and CPU overhead in the IO path. There are some limitations to when FastPath will apply, so it is mostly intended for limited use cases. FastPath is enabled by setting a Latency Sensitive Threshold, which is the threshold below which FastPath is allowed to operate. Once the device latency goes above the threshold, the plugin moves to SlowPath, ensuring that fairness is respected when latency has a higher impact.

To learn more, see the VMware Docs article Set Latency Sensitive Threshold.
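
As a hedged example (the device identifier and the 10 millisecond value are placeholders; verify the options against the documentation for your build), the threshold can be set per device from the ESXi shell, and the devices claimed by HPP can be listed to confirm the current configuration:

esxcli storage core device latencythreshold set -d naa.id -t 10
esxcli storage hpp device list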

HPP as the default plugin for vSAN

With the release of vSphere 7.0 U2, HPP is now the default MPP for all devices (SAS/SATA/NVMe) used with vSAN. Note that HPP is also the default plugin for NVMe fabric devices. This is an infrastructure improvement to ensure vSAN uses the improved storage plugin and can take advantage of the improvements.

VOMA improvements

vSphere On-disk Metadata Analyzer (VOMA) is used to identify and fix metadata corruption affecting the file system or underlying logical volumes. With the release of vSphere 7.0 U2, VOMA support has now been enabled for spanned VMFS volumes.
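
For illustration, a metadata check of an offline VMFS datastore can be run from the ESXi shell (the device name and partition number are placeholders for the device backing your datastore):

voma -m vmfs -f check -d /vmfs/devices/disks/naa.id:1

Here -m selects the module (vmfs), -f the function (check), and -d the device and partition to analyze.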

For more information on VOMA, see:

  • VMware Docs article: Checking Metadata Consistency with VOMA
  • VMware KB article: Using vSphere On-disk Metadata Analyzer (VOMA) to check VMFS metadata consistency (2036767)

vVols

vVols Enhancements and Updates

Support for Higher Queue Depth with vVols Protocol Endpoints

In some cases, the Disk.SchedNumReqOutstanding (DSNRO) configuration parameter did not match the queue depth of the vVols Protocol Endpoint (PE) (VVolPESNRO). With the release of vSphere 7.0 U2, the default QD for the PE will now be 256 or the maxQueueDepth of the exposed LUN. Subsequently, the default minimum PE QD is now 256.
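
If you need to check or align DSNRO with the PE queue depth on a particular device, a hedged sketch from the ESXi shell (naa.id is a placeholder, and the value 256 assumes the new default PE queue depth) is:

esxcli storage core device set -d naa.id -O 256

The protocol endpoints visible to the host can be listed with:

esxcli storage vvol protocolendpoint list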

VMware KB article: Changing the queue depth for QLogic, Emulex, and Brocade HBAs (1267)

Create larger than 4GB Config vVol

This allows the Config vVol to be larger than the default 4GB so partners can store images for automated builds. Some of our partners needed to store images and build files in the Config vVol, which was previously limited to 4GB. Now the Config vVol can be increased in size, similar to a Data vVol.

vVols with CNS and Tanzu

SPBM Multiple Snapshot Rule Enhancements

With vVols, Storage Policy-Based Management (SPBM) gives the VI admin autonomy to manage storage capabilities, at a VM level, via policy. With the release of vSphere 7.0 U2, we have enabled our vVols partners to support multiple snapshot rules in a single SPBM storage policy. This feature will need to be supported in the respective VASA providers that enable snapshot policies to be constructed. When supported by our vVols partners, it will be possible to have a single policy with multiple rules with different snapshot intervals.

32 Snapshot support for Cloud Native Storage (CNS) for First Class Disks

Persistent Volumes (PV) are created in vSphere as First-Class Disks (FCD). FCDs are independent disks with no VM attached. With the release of vSphere 7.0 U2, we are adding snapshot support of up to 32 snapshots for FCDs. This enables you to create snapshots of your K8s PVs which goes along with the SPBM multiple snapshot rules.

CNS PV to vVol mapping

In some cases, customers may want to see which vVol is associated with which CNS Persistent Volume (PV). With the release of vSphere 7.0 U2, the CNS UI now shows a mapping of each PV to its corresponding vVol FCD.

vSphere 7.0 U1

Features and enhancements for vSphere 7.0 Update 1

VMFS Enhancements

  • SESparse Snapshot consolidation bloat reduction.

    • We have optimized the SESparse snapshot consolidation process to reduce bloat. When using thin VMDKs, disk usage can increase when consolidating vSphere snapshots and unmapping the deleted data; the optimized consolidation process reduces this growth.
  • Reduced vSphere Snapshot stun time.

    • We have optimized the snapshot process to help reduce stun time during snapshot creation and deletion. By updating the way the Affinity Manager updates Resource Clusters (RC), we have reduced snapshot creation and deletion stun times. We have also enhanced the reporting of snapshot consolidation progress.

NFS Enhancements

  • NAS VAAI Plugins

    • Previously, the installation of NAS VAAI plugins required a reboot of the ESXi host. In vSphere 7 Update 1, we have enabled the ability to install NAS VAAI plugins without requiring a reboot (see the example after this list).
  • Cloning of LZT (lazy zeroed thick) disks could previously fail with an unsupported disk type error. We have updated the code to support all disk types.
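
As a hedged illustration of a reboot-free NAS VAAI plugin installation from the ESXi shell (the depot path and bundle name are placeholders; use the offline bundle supplied by your storage vendor):

esxcli software vib install -d /vmfs/volumes/datastore1/vendor-nas-vaai-plugin.zip

The command output includes a Reboot Required field; with vSphere 7 Update 1 and later, NAS VAAI plugin installs should report that no reboot is needed.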

RDM

  • pRDM extension with Microsoft WSFC

    • One function RDMs can currently provide over other shared-disk options is the ability to hot-extend shared disks. In vSphere 7 Update 1, we have validated support for online disk/LUN expansion for pass-through RDMs used with Windows Server Failover Clustering (WSFC).

NVMeoF

NVMeoF Support for Oracle RAC

There are numerous customers using clustered applications; Oracle RAC for example. As NVMeoF continues to gain support, especially for database instances, we want to ensure we validate the various deployments.

  • With vSphere 7 Update 1, we have extended support for Oracle RAC using NVMeoF targets.

vVols as Principal Storage in VCF 4.1

Support for Virtual Volumes (vVols) as principal storage in VMware Cloud Foundation 4.1

With the release of vSphere 7.0 Update 1, there were also updates to Cloud Foundation and Tanzu.

VMware Cloud Foundation 4.1

  • In VCF 4.1, we have added support for vVols as principal storage in workload domains.

With VMware Cloud Foundation (VCF), your management domain requires vSAN, which can easily be managed using Storage Policy Based Management (SPBM). SPBM allows simplified operational management of your storage capabilities. Although you can use tag-based policies with external storage for VCF, that approach does not scale easily and requires quite a bit of manual operation. When you think about the possible scale VCF enables, manually tagging datastores can become daunting. Being able to programmatically manage all your VCF storage simplifies your operations, freeing valuable time for other tasks.

In VCF 4.0, we supported vSAN, NFS, and VMFS on FC as principal storage for newly created Workload Domain clusters. VCF 4.1 expands the principal storage options by adding support for vVols; previously, vVols was supported as supplemental storage only. The big difference between the two is that principal storage is the initial storage option selected when creating new clusters in VCF, and its setup is automated through VCF workflows, while supplemental storage is added to a cluster manually through the vSphere Client after the cluster has been created. With vVols, numerous array benefits may now be utilized in VCF, and you can use the same SPBM management plane for your vSAN and external arrays. vVols enables you to use all of your array’s capabilities, such as array-based snapshots, cloning, and replication, all managed via policies, on a single vVols datastore, with VM granularity.

Setting up vVols as principal storage in VCF 4.1

The VCF engineering team has been working diligently, internally and with our storage partners, to enable vVols as principal storage. With the 4.1 release, we support the NFS 3.x, FC, and (with some limitations) iSCSI protocols for vVols. For iSCSI, there are a few pre-tasks that must be completed: the software iSCSI initiator must be set up on all hosts in the new WLD, and your VASA array must be listed as a dynamic target.
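
As a hedged sketch of those iSCSI pre-tasks from the ESXi shell (the adapter name vmhba65 and the target address are placeholders for your environment), enable the software iSCSI initiator and add the array as a dynamic (send targets) target:

esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add -A vmhba65 -a 192.168.10.50:3260

The same steps can be completed from the vSphere Client; the commands are shown only to make the pre-tasks concrete.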

The VASA registration has been enabled outside the workflow in case incorrect VASA details are entered. This way, the workflow doesn’t fail, and you can update incorrect information for the VASA registration.

Once you get to the storage step for the new WLD, you can enter the details for the vVols datastore.

After going through the rest of the details for the new WLD, you will see we now have vVols storage.

With the WLD build completed, you can then go into your hosts, and you will see a vVols datastore connected to the hosts.

By default, an SPBM policy, “vVols No Requirement Policy,” is created. We do not create any other SPBM policies because there are too many variables between array types and customer requirements. There is no way to generate an advanced policy, tailored to the requirements needed, without input from the customer. This allows the customer to create specific and tailored SPBM policies that meet application, organization, or security requirements.

vVols continues to be developed for many of VMware’s products, and our partners also continue to enable more and more features for vVols. To learn more about vVols or VCF, head over to the vVols or VCF pages at core.vmware.com.

To learn more about VCF and vVols, make sure to attend Todd Simmons and my VMworld session.
VCF and vVols: Empower Your External Storage [HCI2270]

Be sure to check out the VCF announcement blog.
What’s New with VMware Cloud Foundation 4.1

vSphere 7.0

Core Storage features and enhancements included in vSphere 7.0

NVMeoF

NVMeoF Insight

NVMe continues to become more and more popular because of its low latency and high throughput. Industries, such as Artificial Intelligence, Machine Learning, and IT, continue to advance, and the need for increased performance continues to grow. Typically, NVMe devices are local using the PCIe bus. So how can you take advantage of NVMe devices in an external array? The industry has been advancing external connectivity options using NVMe over Fabrics (NVMeoF). Connectivity can be either IP or FC based. There are some requirements for external connectivity to maintain the performance benefits of NVMe as typical connectivity is not fast enough.

VMware NVMeoF

In vSphere 7, VMware added support for shared NVMe storage using NVMeoF. For external connectivity, NVMe over Fibre Channel and NVMe over RDMA (RoCE v2) are supported.

With NVMeoF, targets are presented as namespaces, which are equivalent to SCSI LUNs, to a host in Active/Active or Asymmetric Access modes. This enables ESXi hosts to discover and use the presented NVMe namespaces. ESXi emulates NVMeoF targets as SCSI targets internally and presents them as active/active SCSI targets or implicit SCSI ALUA targets.

NVMe over Fibre Channel

This technology maps NVMe onto the FC protocol enabling the transfer of data and commands between a host computer and a target storage device. This transport requires an FC infrastructure that supports NVMe.

To enable and access NVMe over FC storage, install an FC adapter that supports NVMe in your ESXi host. No configuration of the adapter is required; it automatically connects to an appropriate NVMe subsystem and discovers all shared NVMe storage devices. You may, at a later time, reconfigure the adapter and disconnect its controllers or connect other controllers.
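
To confirm what the host sees after the adapter is installed, a couple of illustrative ESXi shell commands (output will vary by environment) are:

esxcli nvme adapter list
esxcli nvme controller list

The first lists the NVMe adapters the host recognizes, and the second lists the NVMe controllers those adapters have connected to.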

NVMe over FC Requirements

  • NVMe array supporting FC
  • Compatible vSphere 7 ESXi host
  • HW NVMe adapter (HBA supporting NVMe)
  • NVMe controller

NVMe over RDMA

This technology uses Remote Direct Memory Access (RDMA) transport between two systems on the network. The transport enables in-memory data exchange, bypassing the operating system and processor of either system. ESXi supports RDMA over Converged Ethernet v2 (RoCE v2).

To enable and access NVMe storage using RDMA, the ESXi host uses an RNIC on the host together with a software NVMe over RDMA storage adapter. You must configure both adapters before using them for NVMe storage discovery (a command-line sketch follows).
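
A hedged sketch of creating the software NVMe over RDMA adapter from the ESXi shell (vmrdma0 is a placeholder for the RDMA device backed by your RoCE v2 capable NIC; the adapter can also be added from the vSphere Client):

esxcli rdma device list
esxcli nvme fabrics enable --protocol RDMA --device vmrdma0

The first command helps identify the RDMA device name to pass to the second.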

NVMe over RDMA requirements:

  • NVMe array supporting RDMA (RoCE v2) transport
  • Compatible ESXi host
  • Ethernet switches supporting a lossless network.
  • Network adapter supporting RoCE v2
  • SW NVMe over RDMA adapter
  • NVMe controller
  • Network Requirements for RDMA over Converged Ethernet

NVMeoF Setup Prerequisites

When setting up NVMeoF, there are a few practices that should be followed.

  • Do not mix transport types to access the same namespace.
  • Ensure all active paths are presented to the host.
  • NMP is not used/supported; instead, HPP (High-Performance Plugin) is used for NVMe targets.
  • You must have dedicated links, VMkernels, and RDMA adapters to your NVMe targets.
  • Dedicated layer 3 VLAN or layer 2 connectivity
  • Limits:
    • Namespaces: 32
    • Paths: 128 (max 4 paths per namespace on a host)
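
Once the adapters are configured and targets are connected, an illustrative way to verify the result from the ESXi shell (output varies by environment) is:

esxcli nvme namespace list
esxcli storage core path list

The first command lists the NVMe namespaces presented to the host, and the second lists storage paths so you can confirm the expected number of paths per namespace.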

Clustered VMDK

Shared VMDK or Clustered VMDK in VMFS6

In vSphere 7, VMware added support for SCSI-3 Persistent Reservations (SCSI-3 PR) at the virtual disk (VMDK) level. What does this mean? You now have the ability to deploy a Windows Server Failover Cluster (WSFC), using shared disks, on VMFS. This is yet another move to reduce the need for RDMs for clustered systems. With supported hardware, you may now enable support for clustered virtual disks (VMDKs) on a specific datastore, allowing you to migrate off your RDMs to VMFS and regain many of the virtualization benefits lost with RDMs.

Clustered/Shared VMDKs on VMFS6 Prerequisites

  • Your array must support ATS, SCSI-3 PR type Write Exclusive-All Registrant (WEAR).
  • Only supported with arrays using Fibre Channel (FC) for connectivity.
  • Only VMFS6 datastores.
  • Storage devices can be claimed by NMP or any other third-party (non-VMware) plugins (MPPs). But please check with the vendor regarding the support for Shared VMDK before using their plugin (MPP).
  • VMDKs must be Eager Zeroed Thick (EZT) provisioned (see the example command after this list).
  • Clustered VMDKs must be attached to a virtual SCSI controller with bus sharing set to “physical.”
  • A DRS anti-affinity rule is required to ensure VMs, nodes of a WSFC, run on separate hosts.
  • Change/increase the WSFC Parameter "QuorumArbitrationTimeMax" to 60.
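
For reference, a hedged example of creating an EZT virtual disk from the ESXi shell (size and path are placeholders; a disk created from the vSphere Client with the Thick Provision Eager Zeroed option is equivalent):

vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/ClusteredDS/wsfc-node1/shared-data.vmdk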

Other Caveats

  • Windows Server 2012 R2/2016/2019 with SQL Server 2016/2017 was used to validate the configuration.
  • The boot disk (and all non-shared disks) should be attached to a separate virtual SCSI controller with bus sharing set to “none.”
  • Mixing clustered and non-shared disks on a single virtual SCSI controller is not supported.
  • The datastore cannot be expanded or span multiple extents.
  • All hosts and vCenter must be vSphere 7 or above
  • A mix of clustered VMDKs and other disk types (vVols, RDMs) is not supported
  • Limits:
    • Support for up to 5 WSFC nodes (Same as RDMs)
    • 128 clustered VMDKs per host
  • Only Cluster across Box (CaB) is supported, Cluster in a Box (CiB) is not supported.

vSphere Features

  • Supported Features:
    • vMotion to supported hosts meeting the same requirements.
  • Unsupported Features:
    • Snapshots, cloning, and Storage vMotion
    • Fault Tolerance (FT)
    • Hot change to VM HW or hot expansion of clustered disks

Here’s a link to VMware’s document on Microsoft Clusters.

Demo and details on migrating WSFC using RDMs to Shared VMDK on VMFS

vSphere 7 WSFC RDM to Shared VMDK Migration

Enabling Clustered VMDK

Enabling Shared VMDK support on VMFS Datastores

When you navigate to your supported datastore, under the Configure tab, you will see a new option to enable Clustered VMDK. If you are going to migrate or deploy a Microsoft WSFC cluster using shared disks, then you would enable this feature. Once the feature is enabled, you can then follow the Setup for Windows Server Failover Clustering documentation to deploy your WSFC on the VMFS6 datastore.

Perennially Reserved Flag for RDMs

Using the Perennially Reserved Flag for WSFC RDMs

In cases where customers are using numerous pRDMs in their environment, host boot times or storage rescans can take a long time. The reason for the longer scan times is that each LUN attached to a host is scanned at boot or during a storage rescan. Typically, RDMs are provisioned to VMs for Microsoft WSFC and are not directly used by the host. During the scan, ESXi attempts to read the partitions on all the disks, but it cannot do so for devices persistently reserved by the WSFC, so the more such devices there are, the longer a host can take to boot or rescan storage. The WSFC uses SCSI-3 persistent reservations to control locking between the nodes of the WSFC, which blocks the hosts from being able to read those devices.

VMware recommends implementing perennial reservations for all ESXi hosts hosting VM nodes with pRDMs. See KB 1016106 for more details.

The question then arises; How can you get the host not to scan these RDMs and reduce boot or rescan times? I’m glad you asked!

There is a device flag called “Perennially Reserved” that tells the host the RDM should not be scanned because it is used elsewhere (perennially) in the environment. Before vSphere 7, this flag could only be set via the CLI, and it requires the device UUID (naa.ID).

The command to set the flag to true:
esxcli storage core device setconfig -d naa.id --perennially-reserved=true

To verify the setting:
esxcli storage core device list -d naa.id

In the device list you should see:
Is Perennially Reserved: true

When setting this option, it must be run for each relevant RDM used by the WSFC and on every host with access to that RDM. You can also set the Perennially Reserved flag in Host Profiles.

With the release of vSphere 7, setting the Perennially Reserved flag to true was added to the UI under storage devices. There has also been a field added to show the current setting for the Perennially Reserved flag.

Once you select the RDM to be Perennially Reserved, you have the option to “Mark the selected device as perennially reserved” for the current host or multiple hosts in the cluster. This eliminates the manual process of setting the option per host via CLI. If preferred, you can still use ESXCLI, PowerCLI, or Host Profiles.

Once you click YES, the flag will be set to true and the UI updated.

You may also unmark the device using the same process.

Setting the Perennially Reserved flag on the pRDMs used by your WSFC is recommended in the clustering guides. When set, ESXi no longer tries to scan the devices, which can reduce boot and storage rescan times. I have added links below to resources on the clustering guides and the use of this flag. Another benefit of flagging RDMs is that you can easily see which devices are RDMs and which are not.

Resources:

  • ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS/WSFC nodes with RDMs may take a long time to start or during LUN rescan (1016106)
  • Guidelines for Microsoft Clustering on vSphere (1037959)
  • Microsoft Windows Server Failover Clustering (WSFC) with shared disks on VMware vSphere 7.x: Guidelines for supported configurations (79616)
  • Microsoft Windows Server Failover Clustering (WSFC) with shared disks on VMware vSphere 6.x: Guidelines for supported configurations (2147661)

Affinity 2.0

VMFS Affinity Manager 2.0

Overview

Some of the benefits of deploying VMs as thin-provisioned VMDKs are the effective use of space and space reclamation. A thin VMDK is a file on VMFS where Small File Blocks (SFBs) are allocated on demand at the time of the first write IO. There can be an overhead cost to this process, which can affect performance. In some cases, for maximum performance, it is recommended that Eager Zeroed Thick (EZT) disks be used to avoid the overhead of allocating space for new data.

First Write Process

In VMFS, resources are organized on-disk in groups called “Resource Clusters” (RCs). When a file requires new storage resources (new data to be written), two decisions are made:

1. Which RC to fetch the resources from.

2. Which resources from the RC to allocate.

The “Affinity Manager” is the component of VMFS6 responsible for RC allocation. The Affinity Manager allocates RCs, particularly Small File Block Clusters (SFBCs), such that as few files as possible share the same RC. The process of mapping a resource to an RC is called affinitizing. VMFS is shared storage, and multiple hosts can request resources in parallel. Resource allocations are synchronized using an on-disk lock (ATS) with no other communication between hosts, which can lead to issues during allocation. The Resource Manager requests RCs from the Affinity Manager, which then responds to the VMFS file layer.

New Affinity 2.0 Manager

With vSphere 7, we are introducing Affinity 2.0, which is designed to be smarter and more efficient in the allocation of resources, minimizing overhead in the IO path. This is accomplished by creating a cluster-wide view of the resource allocation status of the RCs, dividing disk space into “regions” and ranking them based on the number of blocks allocated by the current host as well as other hosts. This information is called a “Region Map.” With the Region Map, the Affinity Manager can quickly supply information on available RCs to the Resource Manager.

Benefit

What does all this mean? When the Resource Manager requests RCs from the new Affinity Manager on a first write IO, the allocation is no longer a “go and find” search for an RC, avoiding the back-and-forth overhead. With the Region Map, the Affinity Manager knows where and what RC resources are available and can quickly direct the Resource Manager. The resource request from the VMFS file layer now goes directly to the Affinity Manager, which relies on the generated Region Map to find an available RC to allocate. The Affinity Manager then locks the RC on-disk, checks for free resources, and hands it over to the Resource Manager. This reduces the repeated back and forth between the Affinity Manager and Resource Manager trying to find an available RC, thereby reducing metadata IO and the overhead required for the first write on thin-provisioned disks. It can also improve the allocation of space for EZT or LZT provisioning.

End Of Availability (EOA) vFRC and CBRC 1.0

With the release of vSphere 7, there are a few antiquated features that have reached End of Availability.

vSphere Flash Read Cache (vFRC) EOL

vFRC currently has a minimal customer base, and the VAIO framework allows third-party vendors to create custom caching solutions. When you upgrade to vSphere 7, you will receive a warning message that vFRC will no longer be available: "vFRC will be gone with this upgrade, please deactivate vFRC on a VM if using it."

Content-Based Read Cache (CBRC) 1.0 EOA

CBRC 1.0 has a maximum cache size of 2GB, whereas 2.0 has a maximum of 32GB. As of vSphere 6.5, CBRC 2.0 is the default for Content-Based Read Cache. Starting in vSphere 7, CBRC 1.0 has been removed to ensure it is not used, especially in Horizon environments. This also eliminates the building and compiling of unused code.

vVols Interoperability

vVols Interoperability with VMware Products

VMware Virtual Volumes (vVols) adoption continues to grow and is accelerating in 2020, and it’s easy to see why. vVols eliminates LUN management, accelerates deployments, simplifies operations, and enables utilization of all of your array’s functionality. VMware and our storage partners continue to develop and advance vVols, and its functionality. In vSphere 7, more features and enhancements have been added, showing the continued commitment to the program.

vVols Support in SRM 8.3

Site Recovery Manager - SRM

Because vVols uses array-based replication, it is very efficient. Array-based replication is a preferred method of replicating data between arrays. With vVols and SPBM, you can easily manage which VMs are replicated rather than everything in a volume or LUN. With the release of Site Recovery Manager 8.3, you can now manage your DR process with SRM while using the replication efficiency and granularity of vVols and SPBM.

Here’s a link to the announcement blog for SRM 8.3

With vVols and SRM, you can have independent vVols replication-groups/SRM protection-groups for a single VM, application, or group of VMs. Another benefit is each replication-group/protection-group can have different RPOs, and all use array-based replication.

vVols Support for CNS

Cloud-Native Storage - CNS

Kubernetes continues to grow in adoption, and VMware is at the forefront. One of K8s’ requirements is persistent storage, and until now, that included vSAN, NFS, and VMFS. The thing is, vVols couldn’t be more suited for K8s storage because a vVol is its own entity. Now, deploy an FCD as a vVol and you’ve got a first-class disk as a first-class citizen, with additional benefits like mobility and CSI-to-SPBM policy mapping. With the initial release, snapshots and replication with vVols are not supported.

vVols Support in vRealize Operations 8.1

vRealize Operations - vROps

A feature that has been requested for a while is finally available: support for vVols datastores in vROps! With the release of vROps 8.1, you can now use vROps monitoring on your vVols datastores the same as any other datastore, giving you alerting, planning, troubleshooting, and more for your vVols datastores. For more information, here's the link to vROps.
Make sure to read about the new release in the vROps 8.1 announcement blog.

vVols as Supplemental Storage in VCF

VMware Cloud Foundation - VCF

VMware Cloud Foundation allows organizations to deploy and manage their private and public clouds. VCF currently supports vSAN, VMFS, and NFS as principal storage. Customers are asking for support of vVols as principal storage, and while the VCF team continues to evaluate and develop that option, it is not yet available. In the meantime, vVols can be used as supplemental storage after the Workload Domain build has completed. Support for vVols as supplemental storage is a partner-supported option.

Please work with your storage array vendor for the supported processes and procedures in setting up vVols with VCF as supplemental storage.

For more information, here’s the link to VCF.

Here’s a link to the blog on What’s New in VCF 4.0

FAQs

What's New in vSphere 7 Core Storage?

In vSphere 7, VMware added support for shared NVMe storage using NVMeoF. For external connectivity, NVMe over Fibre Channel and NVMe over RDMA (RoCE v2) are supported. With NVMeoF, targets are presented as namespaces, which are equivalent to SCSI LUNs, to a host in Active/Active or Asymmetric Access modes.

What is the new feature in vSphere 7?

Improved Clustering features

VMware DRS (Distributed Resource Scheduler) has been improved in vSphere 7. DRS can now ensure load balancing for both VMs and containers. In vSphere 6.7 and earlier, DRS checked the load of each ESXi host in a cluster.

What is new in vSphere 7 Update 1?

With vSphere 7 Update 1, the Monster VM can scale up to 24TB of memory and support up to 768 vCPUs, leaving other hypervisor vendors far behind in this category. Monster VMs support large scale-up environments by offering the full resources of a host to individual VMs.

What is new in vSphere 7.0 U3?

VMware has added support for dedicated witness host appliances for vSAN topologies in vSphere 7 U3. vSphere Lifecycle Manager (vLCM) now manages standalone vSAN witness nodes. You can use vLCM to patch the witness node and save time when patching vSAN clusters.

What is the minimum storage for ESXi 7?

ESXi 7.0 requires a boot disk of at least 32 GB of persistent storage such as HDD, SSD, or NVMe. Use USB, SD and non-USB flash media devices only for ESXi boot bank partitions. A boot device must not be shared between ESXi hosts.

What is the difference between vSphere 7 and 8?

Some limits have been increased in VMware vSphere 8 compared to VMware vSphere 7 U3: The number of vGPU devices is increased to 8. The number of ESXi hosts that can be managed by Lifecycle Manager is increased from 400 to 1,000. VMs per cluster is increased from 8,000 to 10,000.

What is VMware vSphere 7 Essentials?

VMware vSphere Essentials Kits are for small businesses and combine virtualization for up to three physical servers with centralized management using VMware vCenter Server for Essentials.

Which benefits are offered in the latest update of vSphere?

vSphere unleashes key innovations to help customers supercharge workload performance, accelerate innovation for DevOps teams, improve operational efficiency and IT productivity, and bring the benefits of the cloud to their on-premises infrastructure.

Why upgrade to vSphere 7?

VMware vSphere 7 delivers key capabilities to enable IT organizations to address key trends that are placing demands on IT infrastructure: Explosive growth in quantity and variety of applications, from business-critical apps to new intelligent workloads. Rapid growth of hybrid cloud environments and use cases.

What are the DRS changes in vSphere 7?

The New DRS

It computes a VM DRS score on each host and moves the VM to the host that provides the highest VM DRS score. The biggest change from the old DRS version is that it no longer balances host load directly.

What version of VMFS is in vSphere 7?

VMFS 6 was released in vSphere 6.5 and is used in vSphere 6.7, vSphere 7.0, and newer versions such as vSphere 8.

Do I need a new license for vSphere 7?

After you upgrade to ESXi 7.0, you must apply a vSphere 7 license. If you upgrade an ESXi host to a version that starts with the same number, you do not need to replace the existing license with a new one. For example, if you upgrade a host from ESXi 6.5 to 6.7, you can use the same license for the host.

Is vSphere 7 an EOL?

Yes. vSphere Platinum, VMware vCloud Suite® Platinum and VMware Cloud Foundation Platinum end of availability (EOA) is being announced as part of vSphere 7.

How many cores does ESXi require?

ESXi 8.0 requires a host with at least two CPU cores.

What is the maximum memory a VM in ESXi 7.0 can have?

Change the Memory Configuration

Introduced in Host Version | Virtual Machine Compatibility | Maximum Memory Size
ESXi 7.0 | ESXi 7.0 and later | 6128 GB
ESXi 6.7 Update 2 | ESXi 6.7 Update 2 and later | 6128 GB
ESXi 6.7 | ESXi 6.7 and later | 6128 GB
ESXi 6.5 | ESXi 6.5 and later | 6128 GB

What is the datastore size limit in vSphere 7?

vSphere 7.0 GA supports a limit of 2^64. We support 1024 datastores shared within a cluster of up to 32 ESXi hosts. The sum of static targets (manually assigned IP addresses) and dynamic targets (IP addresses assigned to discovered targets) may not exceed this number. 256 NFSv3 datastores per host, as well as 256 NFSv4.1 datastores, are supported.

What's new in vSphere 7.0 U2?

vSphere 7.0 Update 2 introduces a new Native Key Provider that is fully embedded within vCenter and clustered ESXi hosts. It allows out-of-the-box, data-at-rest-encryption and can leverage hardware TPM. Its implementation is quick and easy, while improving the overall security of the environment.

What are the features of DRS in vSphere 7?

With vSphere 7, DRS evaluates virtual machine performance instead of the cluster, and also has more granular checks, looking at metrics like CPU Ready Time and memory ballooning. Now, if DRS can provide better performance to a VM on another ESXi host, it will move it or make a recommendation for it to be moved.
