
Monday, September 4, 2023

I/O bottleneck on a VMWare VM for SQL Server: The mission impossible

When there is a performance problem on a SQL Server, we normally rush to treat the symptoms that are impacting performance. While some measures can temporarily alleviate the current issues, the performance problems tend to come back with different symptoms and in a different form. As long as the underlying infrastructure is suboptimal or misconfigured, the issues will persist.

I/O bottleneck and SQL Server symptoms: SQL Server is an I/O-intensive application, and as a result the I/O subsystem must be configured optimally to handle the demanding workload. If the configuration is suboptimal, we will end up observing some combination of the following symptoms indicating I/O performance issues (a quick wait-stats check is sketched after the list):

  • High disk latency and poor I/O throughput (Average Disk Queue Length, Disk Sec/Transfer, IOPS)
  • I/O wait types (IO_COMPLETION, ASYNC_IO_COMPLETION, PAGEIOLATCH_**, WRITELOG)
  • tempdb contention (LATCH_**), either sporadic or long duration
  • High CPU usages and increased Processor Queue Length
  • Slow query execution and increasing query response time
  • Concurrency problem and application time-out
  • Observing lock contention (LCK_M_**) and SQL blocking
  • Memory pressure (RESOURCE_SEMAPHORE), Swapping and Paging activities
  • Reduction of Network Throughput (ASYNC_NETWORK_IO, MB/Sec, Packet/Sec)
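
As a quick check, the cumulative wait statistics can be pulled for exactly these wait types. Below is a minimal sketch using the SqlServer PowerShell module (an assumption: Install-Module SqlServer; "YourSQL_Server" is a placeholder instance name):

Import-Module SqlServer

# Aggregate the I/O-related wait types listed above since the last service restart
Invoke-Sqlcmd -ServerInstance "YourSQL_Server" -Query @"
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('IO_COMPLETION','ASYNC_IO_COMPLETION','WRITELOG',
                    'ASYNC_NETWORK_IO','RESOURCE_SEMAPHORE')
   OR wait_type LIKE 'PAGEIOLATCH%'
ORDER BY wait_time_ms DESC;
"@ | Format-Table -AutoSize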

A famous I/O alert from the storage system to SQL Server: DBAs who manage a SQL Server in a physical or virtual environment are familiar with the following I/O alert. It arises due to the limitation of the SAN's queue depth or a misconfiguration of the VM and its VMDK files.

 SQL Server has encountered 10 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [E:\ProdData\prod_data_04.ndf] in database id 6.  The OS file handle is 0x000000000000134C.  The offset of the latest long I/O is: 0x00003afe460000.  The duration of the long I/O is: 25274 ms.

SQL Server has encountered 22 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [T:\tempdb\tempdb_03.ndf] in database id 2.  The OS file handle is 0x00000000000010E8.  The offset of the latest long I/O is: 0x0000001d230000.
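
Because these warnings land in the SQL Server error log, they can be found with a quick search. A hedged sketch using xp_readerrorlog (arguments: log file number, log type 1 = SQL Server error log, search string; the instance name is a placeholder):

# Search the current SQL Server error log for the 15-second I/O warnings
Invoke-Sqlcmd -ServerInstance "YourSQL_Server" `
              -Query "EXEC master.dbo.xp_readerrorlog 0, 1, N'longer than 15 seconds';"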

An I/O-optimized VMware VM: When building a virtual machine (VM) for SQL Server, we should pay attention to the VMFS datastore, the storage controller, the VMDK files and their configurations. The following are a few areas where everyone should place importance when designing the I/O infrastructure for a SQL Server.

SCSI Controller and Paravirtualized SCSI Driver (PVSCSI): PVSCSI is the high-performance native driver for VMware VMs and the widely recommended driver for all SQL Server deployments: it improves I/O throughput, lowers I/O latency and reduces the number of CPU cycles consumed. Using the PVSCSI driver can improve I/O throughput by up to 12% and reduce CPU usage by up to 30%.

VMFS datastore and I/O isolation: SQL Server has two I/O patterns, random and sequential, with block sizes varying from 512 bytes to 8MB. For this reason, VMFS datastores should be isolated based on the random and sequential I/O patterns in the storage system. Here is an example of a possible datastore isolation:

  • sql_windows_datastore: for Windows and SQL Server binary files.
  • sqldata_datastore_01: for data file – optimized for random I/O
  • sqldata_datastore_02: for Index, Columnstore – optimized for sequential I/O
  • sqllog_datastore_01: for log file – optimized for sequential I/O
  • sqltempdb_datastore: for tempdb data and tempdb log file

SCSI Controller: Currently, there are four commonly used storage controllers in ESXi for VMs, each with different use cases:

  • LSI Logic Parallel - Legacy driver for backward compatibility with older Operating Systems.
  • LSI Logic SAS – This is the default option for a VM which will work in most Operating systems. 
  • VMWare Paravirtual – Paravirtualized SCSI controller developed to enhance performance in all recent Operating Systems that support the latest VMware Tools.
  • NVMe Controller – It is the preferred option if the underlying storage system is based on SSD and NVMe. However, this controller can be used regardless of the underlying storage system for a VM created on ESXi 6.5 and later.

Each VM can have a maximum of two IDE controllers, four SATA controllers, four SCSI controllers and four NVMe controllers, and each storage controller can hold up to 15 VMDK files. When creating a VM for SQL Server, it is important to align each SCSI controller to the intended datastore that was previously created in VMFS on ESXi. A good layout for an I/O-intensive workload would be the following (a PowerCLI sketch for adding a disk on a dedicated PVSCSI controller appears after the list):

  • SCSI Controller 0, scsi(0:0): sql_windows_datastore
  • SCSI Controller 1, scsi(1:0): sqldata_datastore_01
  • SCSI Controller 1, scsi(1:1): sqldata_datastore_02
  • SCSI Controller 2, scsi(2:0): sqllog_datastore_01
  • SCSI Controller 2, scsi(2:1): sqltempdb_datastore, and so on.
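
To illustrate, a hedged PowerCLI sketch that adds a new data disk on its own Paravirtual SCSI controller (the VM name "win01", the 200GB size and the datastore name are example values; an existing Connect-VIServer session is assumed):

# Create a new VMDK on the intended datastore and attach it to a brand-new PVSCSI controller
Get-VM -Name win01 |
    New-HardDisk -CapacityGB 200 -Datastore sqldata_datastore_01 |
    New-ScsiController -Type ParaVirtual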

A single SCSI controller should not hold all the VMDK files used for SQL Server I/O. A bad example is the following:

  • SCSI Controller 0, scsi(0:0): sql_windows_datastore
  • SCSI Controller 0, scsi(0:1): sqldata_datastore_01
  • SCSI Controller 0, scsi(0:2): sqldata_datastore_02
  • SCSI Controller 0, scsi(0:3): sqllog_datastore_01
  • SCSI Controller 0, scsi(0:4): sqltempdb_datastore

The storage controller for the OS, Page File and Backup can be LSI Logic SAS or VMWare Paravirtual. 

Storage Controller Queue Depth: The LSI Logic SAS controller is not an optimal choice for a SQL Server implementation because its queue depth is only 32, which is insufficient for SQL Server I/O. The Paravirtual SCSI controller (PVSCSI), on the other hand, has a default queue depth of 64 and can be configured up to 254. It is highly recommended to use multiple PVSCSI controllers for SQL Server and spread the data, index, log and tempdb files across the controllers.

As PVSCSI is not native to Windows, VMware Tools must be installed. An additional step is to create a Windows Registry key to reconfigure the queue depth for PVSCSI; a reboot of the VM is required for the new value to take effect. Also note that the VMXNET3 network adaptor must be present in the VM to take full advantage of the paravirtualized ecosystem.

The following are two approaches to configuring the queue depth for PVSCSI on a VM:

Using the CMD prompt: Run the following command on the VM to create the required Registry key and the associated value.

REG ADD HKLM\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device ^
    /v DriverParameter /t REG_SZ /d "RequestRingPages=32,MaxQueueDepth=254"

Using PowerShell:

# Check whether the key already exists
Get-Item -Path "HKLM:\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device" -ErrorAction SilentlyContinue

# Create the key, then set the queue depth parameters
New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device" -Force | Out-Null
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device" `
                 -Name DriverParameter `
                 -Value "RequestRingPages=32,MaxQueueDepth=254"

# Verify
Get-Item -Path "HKLM:\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device"

NTFS Allocation Unit Size: The default NTFS allocation unit size is 4K for all volumes up to 16TB. The SQL Server volumes or mount points that hold data files, log files and tempdb files must be formatted with a 64K allocation unit. For the Windows OS and application binary drives, the default 4K is appropriate and does not require any change. While formatting a drive, make sure that the "Quick Format" option is not selected.
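
A minimal PowerShell sketch for formatting such a volume (drive letter E: and the label are example values; the -Full switch performs a full format instead of a quick format):

# Full-format the data volume with a 64K allocation unit
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536 `
              -NewFileSystemLabel "SQLData01" -Full -Force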

To check the NTFS allocation unit size, run the following commands:

  • C:\> fsutil fsinfo ntfsinfo e:
  • C:\> fsutil fsinfo ntfsinfo f:, and so on 

Using PowerShell:

$server = 'YourSQL_Server'
Get-CimInstance -ComputerName $server -ClassName Win32_Volume |
    Where-Object { $_.DriveLetter -gt '' } |
    Sort-Object DriveLetter |
    Select-Object DriveLetter, FileSystem, BootVolume, BlockSize

Partition Alignment: Starting with Windows Server 2008, all partition offsets are aligned to a 1MB (1024KB or 1048576 bytes) boundary. If the VMDK file is created using vSphere vCenter, a partition alignment issue is unlikely to exist. However, partition alignment must be verified, and if there is a misalignment, the partition must be re-created and reformatted to align with a 1MB starting offset or a vendor-recommended offset.

To check partition alignment, run any of the following commands (an explicit 1MB check follows):

  • C:\> wmic partition get Name, BlockSize, StartingOffset, Index

$server = 'YourSQL_Server'
Get-CimInstance -ComputerName $server -ClassName Win32_DiskPartition |
    Sort-Object Name |
    Select-Object Name, BlockSize, StartingOffset, Index
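
To make the 1MB boundary check explicit, the starting offset can be tested with a modulo; a minimal sketch reusing the $server placeholder above:

# Flag any partition whose starting offset is not on a 1MB (1,048,576-byte) boundary
Get-CimInstance -ComputerName $server -ClassName Win32_DiskPartition |
    Select-Object Name, StartingOffset,
                  @{Name = 'AlignedTo1MB'; Expression = { $_.StartingOffset % 1MB -eq 0 }}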

Thick Provisioned Eagerly Zeroed VMDK: For a heavily write-intensive SQL Server, it is recommended to use "Thick Provision Eager Zeroed" VMDK disks. This essentially eliminates the penalty of zeroing out each block on its first write. If the SQL Server workload is mostly read-oriented, "Thin Provisioned" disks will be sufficient and there will be no noticeable degradation in I/O performance.

Power Configuration setting on the VM: The power setting of a SQL Server VM must be in "High Performance" mode. Conserving the power of a VM leads to CPU throttling, which has a severe negative impact on application performance and I/O throughput.
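
To check and change the plan from an elevated prompt (the GUID below is the built-in High Performance scheme on Windows):

# Show the active power plan, then switch to High Performance
powercfg /getactivescheme
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c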

Monitoring I/O Performance for a VM: The following are the four common I/O metrics used to measure performance:

  • GAVG (Guest Average Latency) - Total latency: the time it takes for an I/O to complete, from the moment it leaves the VM until it is acknowledged back.
  • KAVG (Kernel Average Latency) - Time an I/O request spent waiting inside the vSphere storage stack. 
  • DAVG (Device Average Latency) - Latency coming from the physical hardware, HBA and Storage device.
  • QAVG (Queue Average Latency) - Time spent waiting in a queue inside the vSphere Storage Stack.

VMware recommends that DAVG, KAVG and GAVG should not exceed 10 milliseconds for a sustained period of time, and that QAVG should not exceed 1 millisecond. Take a look at this article: https://virtunetsystems.com/how-does-queue-depth-affect-latency-and-iops-in-vmware/.

[Figure taken from "How does Queue Depth affect latency and IOPS in VMware?"]

[Screenshot: Using ESXTOP to examine I/O performance]
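
For scripted monitoring instead of interactive esxtop, a hedged PowerCLI sketch that samples a comparable counter (assumes an existing Connect-VIServer session; "win01" is an example VM name; disk.maxTotalLatency is reported in milliseconds):

# Sample the VM's highest observed total disk latency in real time
Get-Stat -Entity (Get-VM -Name win01) -Stat "disk.maxTotalLatency.latest" `
         -Realtime -MaxSamples 10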

Recommendations in a Nutshell: As per VMware and Microsoft recommended best practices, a SQL Server on a VM should be configured as follows for optimal I/O performance: 

  • VMFS Data Store and VMDK file based on Random and Sequential I/O patterns
  • Paravirtualized SCSI Controller (PVSCSI) and Paravirtualized Network Adaptor (VMXNET3) for high throughput
  • Using multiple SCSI Controllers for the VM to allow more I/O to pass to the storage system
  • Reconfiguration of PVSCSI’s queue depth, up to 254
  • Use a 64K NTFS allocation unit for data and log files on all volumes and mount points, without the "Quick Format" option
  • For write-intensive SQL Servers, use the "Thick Provision Eager Zeroed" VMDK type; otherwise use "Thin Provisioned" VMDKs
  • Ensure partition alignment is correct
  • Use “High Performance” power setting

Tuesday, August 8, 2023

NUMA and soft-NUMA in SQL Server: To get additional I/O threads

Performance can improve significantly if the SQL Server engine detects physical NUMA nodes on the Windows system. Along with hardware NUMA, Microsoft also introduced soft-NUMA (software-based NUMA) to create extra virtual NUMA nodes inside SQLOS. Starting with SQL Server 2016 (13.x), if the Database Engine detects more than eight physical cores per NUMA node or socket, soft-NUMA nodes are created automatically. The creation of soft-NUMA nodes enables the database engine to create more I/O threads to better handle demanding transactional workloads.

The soft-NUMA creation process starts during the startup of the SQL Server service. By default, soft-NUMA creation is enabled in SQL Server and can be disabled or re-enabled by using the ALTER SERVER CONFIGURATION (Transact-SQL) statement with the SET SOFTNUMA argument. Changing the value of this setting requires a restart of the database engine to take effect.
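
A hedged sketch of disabling and verifying automatic soft-NUMA (assumes the SqlServer PowerShell module, a default instance whose service is named MSSQLSERVER, and that the script runs locally on the server; instance and service names are placeholders):

# Disable automatic soft-NUMA, restart the engine, then verify the setting
Invoke-Sqlcmd -ServerInstance "YourSQL_Server" -Query "ALTER SERVER CONFIGURATION SET SOFTNUMA OFF;"
Restart-Service -Name "MSSQLSERVER" -Force
Invoke-Sqlcmd -ServerInstance "YourSQL_Server" -Query "SELECT softnuma_configuration_desc FROM sys.dm_os_sys_info;"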

Purpose of soft-NUMA: The purpose of soft-NUMA is to create artificial groupings of CPU cores, where each group represents a soft-NUMA node. Creating these nodes inside SQL Server allows the database engine (SQLOS) to create extra "LAZY WRITER", "LOG WRITER" and "RESOURCE MONITOR" threads per NUMA node. The database engine automatically decides how many soft-NUMA nodes and threads to create based on the existing NUMA layout and CPU cores.

Please note that the soft-NUMA architecture does not create separate local memory nodes for each NUMA node. Instead, all virtual soft-NUMA nodes use the memory node that their CPU group belongs to, as originally exposed to SQL Server. This means there is no local memory support for soft-NUMA nodes.

Benefits of soft-NUMA:  Since SQL Server is a fully NUMA-aware application, having extra “LAZY WRITER”, “RESOURCE MONITOR” and “LOG WRITER” threads can provide significant performance improvement.  Additional benefits:

  1. Creates multiple "LAZY WRITER" threads, one per NUMA node.
  2. Creates multiple "RESOURCE MONITOR" threads, one per NUMA node.
  3. May create two or more "LOG WRITER" threads, depending on the number of NUMA nodes.
  4. Reduces "Non-Yielding Scheduler" errors and increases SQL Server responsiveness.
  5. Improves CHECKPOINT and I/O operations.
  6. Reduces LATCH contention.

Fewer than 9 CPU cores: Whether SQL Server is installed directly on hardware or runs on a virtual machine, the soft-NUMA creation requirements are the same. If we run SQL Server on a VM with 8 CPU cores, we cannot have soft-NUMA; however, we can easily manipulate the CPU topology at the hypervisor level to expose two vNUMA nodes to Windows Server. SQL Server will treat these as physical NUMA and create two real NUMA nodes and the associated I/O threads (a PowerCLI sketch follows).
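
A hedged PowerCLI sketch of such a topology change (the VM must be powered off first; Set-VM's -CoresPerSocket parameter is available in recent PowerCLI releases; "win01" is an example VM name):

# Present 2 sockets x 4 cores each so the guest can see two vNUMA nodes
Stop-VM -VM win01 -Confirm:$false
Set-VM -VM win01 -NumCpu 8 -CoresPerSocket 4 -Confirm:$false
Start-VM -VM win01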

Beware of creating multiple vNUMA nodes with a small amount of memory: it will not improve performance, but will instead introduce performance problems due to remote memory access. You can evaluate NUMA node memory usage with the following DMV:

SELECT * FROM sys.dm_os_memory_node_access_stats;

Queries/DMVs used to investigate:

-- Hardware NUMA/Memory Node
SELECT @@servername AS 'sql_instance',
       'Original NUMA/Memory Node' AS 'Memory Node',
       memory_node_id,
       CONVERT(DECIMAL(18,2),(virtual_address_space_reserved_kb / 1024.0)) AS virtual_address_space_reserved_mb,
       CONVERT(DECIMAL(18,2),(virtual_address_space_committed_kb / 1024.0)) AS virtual_address_space_committed_mb,
       CONVERT(DECIMAL(18,2),(locked_page_allocations_kb / 1024.0)) AS locked_page_allocations_mb,
       CONVERT(DECIMAL(18,2),(pages_kb / 1024.0)) AS pages_mb,
       CONVERT(DECIMAL(18,2),(shared_memory_reserved_kb / 1024.0)) AS shared_memory_reserved_mb,
       CONVERT(DECIMAL(18,2),(shared_memory_committed_kb / 1024.0)) AS shared_memory_committed_mb,
       CONVERT(DECIMAL(18,2),(foreign_committed_kb / 1024.0)) AS foreign_committed_mb,
       CONVERT(DECIMAL(18,2),(target_kb / 1024.0)) AS target_mb
FROM sys.dm_os_memory_nodes
WHERE memory_node_id <> 64;

-- Hardware information after applying soft-NUMA
SELECT @@servername AS 'sql_instance',
       virtual_machine_type_desc,
       cpu_count,
       softnuma_configuration_desc,
       socket_count,
       cores_per_socket
FROM sys.dm_os_sys_info;

-- soft-NUMA nodes
SELECT @@servername AS 'sql_instance',
       'Memory Node with soft-NUMA' AS 'Memory Node',
       node_id,
       node_state_desc,
       cpu_count
FROM sys.dm_os_nodes
WHERE node_state_desc = 'ONLINE';
-- LAZY WRITER, RESOURCE MONITOR and LOG WRITER threads
SELECT @@servername AS 'sql_instance',
       spid,
       lastwaittype,
       cmd,
       status
FROM sys.sysprocesses
WHERE cmd IN ('LAZY WRITER', 'RESOURCE MONITOR', 'LOG WRITER');

Following are a few examples of NUMA and soft-NUMA creation: These configurations were tested in an ESXi 8.0, Windows Server 2022 and SQL Server 2022 environment. Regardless of the environment, the final outcome will be the same.

1 vSocket, 8 vCores per vSocket, No CPU Topology applied:

Microsoft SQL Server 2022 (RTM) - 16.0.1000.6 (X64)
SQL Server detected 1 sockets with 8 cores per socket and 8 logical processors per socket, 8 total logical processors; using 8 logical processors based on SQL Server licensing.
CPU vectorization level(s) detected:  SSE SSE2 SSE3 SSSE3 SSE41 SSE42 AVX AVX2 POPCNT BMI1 BMI2 AVX512 (F CD BW DQ VL)
Node configuration: node 0: CPU mask: 0x00000000000000ff:0 Active CPU mask: 0x00000000000000ff:0.
Total Log Writer threads: 2, Node CPUs: 4, Nodes: 1, Log Writer threads per CPU: 1, Log Writer threads per Node: 2

1 vSocket, 9 vCores per vSocket, No CPU Topology applied:

Microsoft SQL Server 2022 (RTM) - 16.0.1000.6 (X64)
SQL Server detected 1 sockets with 9 cores per socket and 9 logical processors per socket, 9 total logical processors; using 9 logical processors based on SQL Server licensing.
Automatic soft-NUMA was enabled because SQL Server has detected hardware NUMA nodes with greater than 8 physical cores.
Node configuration: node 0: CPU mask: 0x000000000000001f:0 Active CPU mask: 0x000000000000001f:0.
Node configuration: node 1: CPU mask: 0x00000000000001e0:0 Active CPU mask: 0x00000000000001e0:0.
Total Log Writer threads: 2, Node CPUs: 2, Nodes: 2, Log Writer threads per CPU: 1, Log Writer threads per Node: 2

1 vSocket, 10 vCores per vSocket, No CPU Topology applied:

Microsoft SQL Server 2022 (RTM) - 16.0.1000.6 (X64)
SQL Server detected 1 sockets with 10 cores per socket and 10 logical processors per socket, 10 total logical processors; using 10 logical processors based on SQL Server licensing.
Automatic soft-NUMA was enabled because SQL Server has detected hardware NUMA nodes with greater than 8 physical cores.
CPU vectorization level(s) detected:  SSE SSE2 SSE3 SSSE3 SSE41 SSE42 AVX AVX2 POPCNT BMI1 BMI2 AVX512 (F CD BW DQ VL)
Node configuration: node 0: CPU mask: 0x000000000000001f:0 Active CPU mask: 0x000000000000001f:0.
Node configuration: node 1: CPU mask: 0x00000000000003e0:0 Active CPU mask: 0x00000000000003e0:0.
Total Log Writer threads: 2, Node CPUs: 2, Nodes: 2, Log Writer threads per CPU: 1, Log Writer threads per Node: 2

2 vSocket, 5 vCores per vSocket, No CPU Topology applied:

Microsoft SQL Server 2022 (RTM) - 16.0.1000.6 (X64)
SQL Server detected 2 sockets with 5 cores per socket and 5 logical processors per socket, 10 total logical processors; using 10 logical processors based on SQL Server licensing.
Automatic soft-NUMA was enabled because SQL Server has detected hardware NUMA nodes with greater than 8 physical cores.
CPU vectorization level(s) detected:  SSE SSE2 SSE3 SSSE3 SSE41 SSE42 AVX AVX2 POPCNT BMI1 BMI2 AVX512 (F CD BW DQ VL)
Node configuration: node 0: CPU mask: 0x000000000000001f:0 Active CPU mask: 0x000000000000001f:0.
Node configuration: node 1: CPU mask: 0x00000000000003e0:0 Active CPU mask: 0x00000000000003e0:0.
Total Log Writer threads: 2, Node CPUs: 2, Nodes: 2, Log Writer threads per CPU: 1, Log Writer threads per Node: 2

2 vSocket, 5 vCores per vSocket, 2 vNUMA, CPU Topology applied:

Microsoft SQL Server 2022 (RTM) - 16.0.1000.6 (X64)
SQL Server detected 2 sockets with 5 cores per socket and 5 logical processors per socket, 10 total logical processors; using 10 logical processors based on SQL Server licensing.
CPU vectorization level(s) detected:  SSE SSE2 SSE3 SSSE3 SSE41 SSE42 AVX AVX2 POPCNT BMI1 BMI2 AVX512 (F CD BW DQ VL)
Node configuration: node 0: CPU mask: 0x000000000000001f:0 Active CPU mask: 0x000000000000001f:0.
Node configuration: node 1: CPU mask: 0x00000000000003e0:0 Active CPU mask: 0x00000000000003e0:0.
Total Log Writer threads: 2, Node CPUs: 2, Nodes: 2, Log Writer threads per CPU: 1, Log Writer threads per Node: 2

2 vSocket, 4 vCores per vSocket, 2 vNUMA, CPU Topology applied:

Microsoft SQL Server 2022 (RTM) - 16.0.1000.6 (X64)
SQL Server detected 2 sockets with 4 cores per socket and 4 logical processors per socket, 8 total logical processors; using 8 logical processors based on SQL Server licensing.
CPU vectorization level(s) detected:  SSE SSE2 SSE3 SSSE3 SSE41 SSE42 AVX AVX2 POPCNT BMI1 BMI2 AVX512 (F CD BW DQ VL)
Node configuration: node 0: CPU mask: 0x000000000000000f:0 Active CPU mask: 0x000000000000000f:0.
Node configuration: node 1: CPU mask: 0x00000000000000f0:0 Active CPU mask: 0x00000000000000f0:0.
Total Log Writer threads: 2, Node CPUs: 2, Nodes: 2, Log Writer threads per CPU: 1, Log Writer threads per Node: 2


References:

Soft-NUMA (SQL Server):
https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/soft-numa-sql-server?view=sql-server-ver16#automatic-soft-numa

How It Works: Soft NUMA, I/O Completion Thread, Lazy Writer, Workers and Memory Nodes:
https://techcommunity.microsoft.com/t5/sql-server-support-blog/how-it-works-soft-numa-i-o-completion-thread-lazy-writer-workers/ba-p/316044

Saturday, August 5, 2023

Paravirtualized Network Adaptor: Changing E1000e to VMXNET3

A very common but misguided practice is to accept the default values during installation or configuration. This may be acceptable or even suitable in some scenarios, but it is often not optimal for a targeted workload, since the default values can eventually cause widespread performance issues.

While creating a virtual machine in vSphere ESXi, many values for CPU, memory, network card, socket, I/O controller and so on come preset as defaults and need to be decided on. VMware sets most of the required hardware resources at the bare minimum necessary to create a virtual machine, regardless of the guest OS. Should we accept these defaults? Probably not. However, many administrators continue to accept these bare-minimum defaults without realizing the performance consequences.

Network Adaptor: Currently there are three types of network adaptors available, and the E1000e is the default. The E1000e is an emulated version of the Intel 82574 Gigabit Ethernet NIC, and the guest OS recognizes it as "Intel(R) 82574L Gigabit Network Connection". If this adaptor is selected, the required driver is already built into the guest OS (Windows, Linux) and has no interaction with the VMware Tools driver.

A few disadvantages of E1000e (an RSS check for the guest NICs is sketched after the list):

  1. It is not paravirtualized, so VM performance is not guaranteed.
  2. It only supports basic network connectivity.
  3. It does not support RSS (Receive Side Scaling).
  4. It uses far more CPU on the hypervisor.
  5. It may cause memory leaks and high CPU in the guest OS.
  6. It drops network packets.
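
Whether RSS is available and enabled on the guest's adaptors can be checked from inside Windows:

# List RSS state per NIC; a NIC without RSS support will show it disabled or missing
Get-NetAdapter | Get-NetAdapterRss | Select-Object Name, Enabled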

VMXNET3: This is a paravirtualized network adaptor developed by VMware, and it is the recommended choice for a VM to gain substantial network performance. To take advantage of this adaptor, VMware Tools must be installed in the virtual machine.
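
After the switch, the guest-side driver can be confirmed; with VMware Tools installed, the adaptor should report an InterfaceDescription similar to "vmxnet3 Ethernet Adapter":

# Confirm the guest now sees the paravirtualized NIC
Get-NetAdapter | Select-Object Name, InterfaceDescription, Status, LinkSpeed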

Changing E1000e to VMXNET3: There are several ways to change the network adaptor type from E1000e to VMXNET3. Before doing this, you should record all network configuration from the existing adaptor. If you would like to keep the MAC address of the E1000e (the existing network card), write it down beforehand.

I found that using PowerCLI is the easiest and safest way to change the network adaptor type from E1000e to VMXNET3.

Method 1: Using PowerCLI to change the NIC type while preserving the original MAC address of the E1000e:

  • Note down the network configuration (IP, subnet, gateway, DNS, etc.) and take a snapshot of the VM.

  • Turn off the VM.

  • Connect to the ESXi server (my ESXi server IP is 192.168.0.22):

Connect-VIServer -Server 192.168.0.22

  • Check the current network adaptor type:

Get-VM win01 | Get-NetworkAdapter

  • Change the NIC type from E1000e to VMXNET3:

Get-VM win01 | Get-NetworkAdapter | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false

  • Turn on the VM.

  • In a Windows VM, open Device Manager (devmgmt.msc) and enable "Show hidden devices" under the View menu.

  • Uninstall the hidden "Intel(R) 82574L Gigabit Network Connection" device.
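
The individual steps above can be consolidated into one hedged script (the VM name "win01" and ESXi address are the example values used earlier; the current type and MAC are recorded before the change):

# Record the current adaptor type and MAC, power off, switch to VMXNET3, power on
Connect-VIServer -Server 192.168.0.22
Get-VM win01 | Get-NetworkAdapter | Select-Object Name, Type, MacAddress
Stop-VM -VM win01 -Confirm:$false
Get-VM win01 | Get-NetworkAdapter | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false
Start-VM -VM win01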

[Screenshot: Using PowerCLI to change the network adaptor]

Method 2: By editing the vmx file:

  • As above, note down the network configuration details.

  • Take a snapshot of the VM.

  • Turn off the VM.

  • Open the datastore where the VM resides.

  • Right-click and download the vmx file to the local desktop.

  • Edit the vmx file and change the adaptor type to vmxnet3 as follows:

ethernet0.virtualDev = "vmxnet3"

  • Add the following line (it disables MAC address validation so the original MAC address can be kept):

ethernet0.CheckMACAddress = "FALSE"

  • Save the vmx file, then upload the edited version to replace the original.

  • Turn on the VM.

  • In a Windows VM, open Device Manager (devmgmt.msc) and enable "Show hidden devices" under the View menu.

  • Uninstall the hidden "Intel(R) 82574L Gigabit Network Connection" device.

References:

Choosing a network adapter for your virtual machine (1001805):

https://kb.vmware.com/s/article/1001805

Understanding full virtualization, paravirtualization, and hardware assist:
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/VMware_paravirtualization.pdf

VMXNET3 vs E1000E and E1000:

https://rickardnobel.se/vmxnet3-vs-e1000e-and-e1000-part-1/

https://rickardnobel.se/vmxnet3-vs-e1000e-and-e1000-part-2/